00:00:00.001 Started by upstream project "autotest-nightly" build number 4173 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3535 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.125 The recommended git tool is: git 00:00:00.125 using credential 00000000-0000-0000-0000-000000000002 00:00:00.126 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.188 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.250 Using shallow fetch with depth 1 00:00:00.250 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.250 > git --version # timeout=10 00:00:00.301 > git --version # 'git version 2.39.2' 00:00:00.301 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.693 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.704 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.716 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:06.716 > git config core.sparsecheckout # timeout=10 00:00:06.727 > git read-tree -mu HEAD # timeout=10 00:00:06.742 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:06.759 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:06.759 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.843 [Pipeline] Start of Pipeline 00:00:06.860 [Pipeline] library 00:00:06.862 Loading library shm_lib@master 00:00:06.863 Library shm_lib@master is cached. Copying from home. 00:00:06.880 [Pipeline] node 00:00:06.898 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.900 [Pipeline] { 00:00:06.911 [Pipeline] catchError 00:00:06.913 [Pipeline] { 00:00:06.925 [Pipeline] wrap 00:00:06.933 [Pipeline] { 00:00:06.942 [Pipeline] stage 00:00:06.944 [Pipeline] { (Prologue) 00:00:06.960 [Pipeline] echo 00:00:06.962 Node: VM-host-SM38 00:00:06.969 [Pipeline] cleanWs 00:00:06.981 [WS-CLEANUP] Deleting project workspace... 00:00:06.981 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.989 [WS-CLEANUP] done 00:00:07.194 [Pipeline] setCustomBuildProperty 00:00:07.281 [Pipeline] httpRequest 00:00:07.688 [Pipeline] echo 00:00:07.689 Sorcerer 10.211.164.101 is alive 00:00:07.698 [Pipeline] retry 00:00:07.700 [Pipeline] { 00:00:07.709 [Pipeline] httpRequest 00:00:07.713 HttpMethod: GET 00:00:07.714 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.715 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.729 Response Code: HTTP/1.1 200 OK 00:00:07.729 Success: Status code 200 is in the accepted range: 200,404 00:00:07.730 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:12.662 [Pipeline] } 00:00:12.679 [Pipeline] // retry 00:00:12.687 [Pipeline] sh 00:00:12.973 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:12.989 [Pipeline] httpRequest 00:00:13.398 [Pipeline] echo 00:00:13.400 Sorcerer 10.211.164.101 is alive 00:00:13.409 [Pipeline] retry 00:00:13.411 [Pipeline] { 00:00:13.425 [Pipeline] httpRequest 00:00:13.430 HttpMethod: GET 00:00:13.430 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:13.431 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:13.437 Response Code: HTTP/1.1 200 OK 00:00:13.438 Success: Status code 200 is in the accepted range: 200,404 00:00:13.438 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:42.616 [Pipeline] } 00:00:42.634 [Pipeline] // retry 00:00:42.642 [Pipeline] sh 00:00:42.927 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:45.481 [Pipeline] sh 00:00:45.762 + git -C spdk log --oneline -n5 00:00:45.762 bbce7a874 event: move struct spdk_lw_thread to internal header 00:00:45.762 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:00:45.762 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:00:45.762 0ce363beb spdk_log: introduce spdk_log_ext API 00:00:45.762 412fced1b bdev/compress: unmap support. 
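The two fetches above pull fixed-revision tarballs of the jbp pipeline repo and the SPDK tree from the internal package mirror, then unpack them into the workspace. A rough standalone equivalent, with plain curl standing in for the Jenkins httpRequest step (curl here is an assumption; the pipeline itself does the download through the httpRequest plugin):

    # Hypothetical manual reproduction of the fetch + unpack steps above.
    commit=bbce7a87401bc737804431cd08d24fede99b1400
    curl -fsS -o "spdk_${commit}.tar.gz" \
        "http://10.211.164.101/packages/spdk_${commit}.tar.gz"
    # --no-same-owner: extracted files belong to the invoking user rather
    # than the UID recorded in the archive.
    tar --no-same-owner -xf "spdk_${commit}.tar.gz"
    git -C spdk log --oneline -n5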
00:00:45.780 [Pipeline] writeFile 00:00:45.793 [Pipeline] sh 00:00:46.078 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:46.092 [Pipeline] sh 00:00:46.375 + cat autorun-spdk.conf 00:00:46.375 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.375 SPDK_TEST_NVME=1 00:00:46.375 SPDK_TEST_FTL=1 00:00:46.375 SPDK_TEST_ISAL=1 00:00:46.375 SPDK_RUN_ASAN=1 00:00:46.375 SPDK_RUN_UBSAN=1 00:00:46.375 SPDK_TEST_XNVME=1 00:00:46.375 SPDK_TEST_NVME_FDP=1 00:00:46.375 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:46.384 RUN_NIGHTLY=1 00:00:46.385 [Pipeline] } 00:00:46.398 [Pipeline] // stage 00:00:46.413 [Pipeline] stage 00:00:46.416 [Pipeline] { (Run VM) 00:00:46.428 [Pipeline] sh 00:00:46.714 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:46.714 + echo 'Start stage prepare_nvme.sh' 00:00:46.714 Start stage prepare_nvme.sh 00:00:46.714 + [[ -n 1 ]] 00:00:46.714 + disk_prefix=ex1 00:00:46.714 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:46.714 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:46.714 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:46.714 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.714 ++ SPDK_TEST_NVME=1 00:00:46.714 ++ SPDK_TEST_FTL=1 00:00:46.714 ++ SPDK_TEST_ISAL=1 00:00:46.714 ++ SPDK_RUN_ASAN=1 00:00:46.714 ++ SPDK_RUN_UBSAN=1 00:00:46.714 ++ SPDK_TEST_XNVME=1 00:00:46.714 ++ SPDK_TEST_NVME_FDP=1 00:00:46.714 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:46.714 ++ RUN_NIGHTLY=1 00:00:46.714 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:46.714 + nvme_files=() 00:00:46.714 + declare -A nvme_files 00:00:46.714 + backend_dir=/var/lib/libvirt/images/backends 00:00:46.714 + nvme_files['nvme.img']=5G 00:00:46.714 + nvme_files['nvme-cmb.img']=5G 00:00:46.714 + nvme_files['nvme-multi0.img']=4G 00:00:46.714 + nvme_files['nvme-multi1.img']=4G 00:00:46.714 + nvme_files['nvme-multi2.img']=4G 00:00:46.714 + nvme_files['nvme-openstack.img']=8G 00:00:46.714 + nvme_files['nvme-zns.img']=5G 00:00:46.714 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:46.714 + (( SPDK_TEST_FTL == 1 )) 00:00:46.714 + nvme_files["nvme-ftl.img"]=6G 00:00:46.714 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:46.714 + nvme_files["nvme-fdp.img"]=1G 00:00:46.714 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:46.714 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:46.714 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:46.714 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:46.714 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:46.714 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.714 + for nvme in "${!nvme_files[@]}" 00:00:46.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:46.975 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.975 + for nvme in "${!nvme_files[@]}" 00:00:46.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:46.975 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.975 + for nvme in "${!nvme_files[@]}" 00:00:46.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:46.975 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:46.975 + for nvme in "${!nvme_files[@]}" 00:00:46.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:46.975 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.975 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:46.975 + echo 'End stage prepare_nvme.sh' 00:00:46.975 End stage prepare_nvme.sh 00:00:46.989 [Pipeline] sh 00:00:47.325 + DISTRO=fedora39 00:00:47.325 + CPUS=10 00:00:47.325 + RAM=12288 00:00:47.325 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:47.325 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:47.325 00:00:47.325 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:47.325 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:47.325 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:47.325 HELP=0 00:00:47.325 DRY_RUN=0 00:00:47.325 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:47.325 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:47.325 NVME_AUTO_CREATE=0 00:00:47.325 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:47.325 NVME_CMB=,,,, 00:00:47.325 NVME_PMR=,,,, 00:00:47.325 NVME_ZNS=,,,, 00:00:47.325 NVME_MS=true,,,, 00:00:47.325 NVME_FDP=,,,on, 00:00:47.325 SPDK_VAGRANT_DISTRO=fedora39 00:00:47.325 SPDK_VAGRANT_VMCPU=10 00:00:47.325 SPDK_VAGRANT_VMRAM=12288 00:00:47.325 SPDK_VAGRANT_PROVIDER=libvirt 00:00:47.325 SPDK_VAGRANT_HTTP_PROXY= 00:00:47.325 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:47.325 SPDK_OPENSTACK_NETWORK=0 00:00:47.325 VAGRANT_PACKAGE_BOX=0 00:00:47.325 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:47.325 FORCE_DISTRO=true 00:00:47.325 VAGRANT_BOX_VERSION= 00:00:47.325 EXTRA_VAGRANTFILES= 00:00:47.325 NIC_MODEL=e1000 00:00:47.325 00:00:47.325 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:47.325 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:49.877 Bringing machine 'default' up with 'libvirt' provider... 00:00:50.138 ==> default: Creating image (snapshot of base box volume). 00:00:50.399 ==> default: Creating domain with the following settings... 
00:00:50.399 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728840639_c684487813afbf2af8fe 00:00:50.399 ==> default: -- Domain type: kvm 00:00:50.399 ==> default: -- Cpus: 10 00:00:50.399 ==> default: -- Feature: acpi 00:00:50.399 ==> default: -- Feature: apic 00:00:50.399 ==> default: -- Feature: pae 00:00:50.399 ==> default: -- Memory: 12288M 00:00:50.399 ==> default: -- Memory Backing: hugepages: 00:00:50.399 ==> default: -- Management MAC: 00:00:50.399 ==> default: -- Loader: 00:00:50.399 ==> default: -- Nvram: 00:00:50.399 ==> default: -- Base box: spdk/fedora39 00:00:50.399 ==> default: -- Storage pool: default 00:00:50.399 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728840639_c684487813afbf2af8fe.img (20G) 00:00:50.399 ==> default: -- Volume Cache: default 00:00:50.399 ==> default: -- Kernel: 00:00:50.399 ==> default: -- Initrd: 00:00:50.399 ==> default: -- Graphics Type: vnc 00:00:50.399 ==> default: -- Graphics Port: -1 00:00:50.399 ==> default: -- Graphics IP: 127.0.0.1 00:00:50.399 ==> default: -- Graphics Password: Not defined 00:00:50.399 ==> default: -- Video Type: cirrus 00:00:50.399 ==> default: -- Video VRAM: 9216 00:00:50.399 ==> default: -- Sound Type: 00:00:50.399 ==> default: -- Keymap: en-us 00:00:50.399 ==> default: -- TPM Path: 00:00:50.399 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:50.399 ==> default: -- Command line args: 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:50.399 ==> default: -> value=-drive, 00:00:50.399 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:50.399 ==> default: -> value=-drive, 00:00:50.399 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:50.399 ==> default: -> value=-drive, 00:00:50.399 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.399 ==> default: -> value=-drive, 00:00:50.399 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:50.399 ==> default: -> value=-device, 00:00:50.399 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.399 ==> default: -> value=-drive, 00:00:50.399 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:50.399 ==> default: -> value=-device, 00:00:50.400 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.400 ==> default: -> value=-device, 00:00:50.400 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:50.400 ==> default: -> value=-device, 00:00:50.400 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:50.400 ==> default: -> value=-drive, 00:00:50.400 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:50.400 ==> default: -> value=-device, 00:00:50.400 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:50.400 ==> default: Creating shared folders metadata... 00:00:50.660 ==> default: Starting domain. 00:00:52.039 ==> default: Waiting for domain to get an IP address... 00:01:14.014 ==> default: Waiting for SSH to become available... 00:01:14.014 ==> default: Configuring and enabling network interfaces... 00:01:15.957 default: SSH address: 192.168.121.19:22 00:01:15.957 default: SSH username: vagrant 00:01:15.957 default: SSH auth method: private key 00:01:17.874 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.965 ==> default: Mounting SSHFS shared folder... 00:01:28.538 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.538 ==> default: Checking Mount.. 00:01:29.926 ==> default: Folder Successfully Mounted! 00:01:29.926 00:01:29.926 SUCCESS! 00:01:29.926 00:01:29.926 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:29.926 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:29.926 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:29.926 00:01:29.937 [Pipeline] } 00:01:29.954 [Pipeline] // stage 00:01:29.963 [Pipeline] dir 00:01:29.964 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:29.966 [Pipeline] { 00:01:29.980 [Pipeline] catchError 00:01:29.982 [Pipeline] { 00:01:29.996 [Pipeline] sh 00:01:30.282 + vagrant ssh-config --host vagrant 00:01:30.282 + sed -ne '/^Host/,$p' 00:01:30.282 + tee ssh_conf 00:01:32.890 Host vagrant 00:01:32.890 HostName 192.168.121.19 00:01:32.890 User vagrant 00:01:32.890 Port 22 00:01:32.890 UserKnownHostsFile /dev/null 00:01:32.890 StrictHostKeyChecking no 00:01:32.890 PasswordAuthentication no 00:01:32.890 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:32.890 IdentitiesOnly yes 00:01:32.890 LogLevel FATAL 00:01:32.890 ForwardAgent yes 00:01:32.890 ForwardX11 yes 00:01:32.890 00:01:32.905 [Pipeline] withEnv 00:01:32.906 [Pipeline] { 00:01:32.920 [Pipeline] sh 00:01:33.205 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:33.205 source /etc/os-release 00:01:33.205 [[ -e /image.version ]] && img=$(< /image.version) 00:01:33.205 # Minimal, systemd-like check. 
00:01:33.205 if [[ -e /.dockerenv ]]; then 00:01:33.205 # Clear garbage from the node'\''s name: 00:01:33.205 # agt-er_autotest_547-896 -> autotest_547-896 00:01:33.205 # $HOSTNAME is the actual container id 00:01:33.205 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:33.205 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:33.206 # We can assume this is a mount from a host where container is running, 00:01:33.206 # so fetch its hostname to easily identify the target swarm worker. 00:01:33.206 container="$(< /etc/hostname) ($agent)" 00:01:33.206 else 00:01:33.206 # Fallback 00:01:33.206 container=$agent 00:01:33.206 fi 00:01:33.206 fi 00:01:33.206 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:33.206 ' 00:01:33.481 [Pipeline] } 00:01:33.497 [Pipeline] // withEnv 00:01:33.506 [Pipeline] setCustomBuildProperty 00:01:33.520 [Pipeline] stage 00:01:33.522 [Pipeline] { (Tests) 00:01:33.538 [Pipeline] sh 00:01:33.823 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:34.099 [Pipeline] sh 00:01:34.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:34.658 [Pipeline] timeout 00:01:34.658 Timeout set to expire in 50 min 00:01:34.660 [Pipeline] { 00:01:34.673 [Pipeline] sh 00:01:34.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:35.531 HEAD is now at bbce7a874 event: move struct spdk_lw_thread to internal header 00:01:35.544 [Pipeline] sh 00:01:35.828 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:36.103 [Pipeline] sh 00:01:36.387 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:36.666 [Pipeline] sh 00:01:36.968 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:36.968 ++ readlink -f spdk_repo 00:01:37.229 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:37.229 + [[ -n /home/vagrant/spdk_repo ]] 00:01:37.229 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:37.229 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:37.229 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:37.229 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:37.229 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:37.229 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:37.229 + cd /home/vagrant/spdk_repo 00:01:37.229 + source /etc/os-release 00:01:37.229 ++ NAME='Fedora Linux' 00:01:37.229 ++ VERSION='39 (Cloud Edition)' 00:01:37.229 ++ ID=fedora 00:01:37.229 ++ VERSION_ID=39 00:01:37.229 ++ VERSION_CODENAME= 00:01:37.229 ++ PLATFORM_ID=platform:f39 00:01:37.229 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:37.229 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.229 ++ LOGO=fedora-logo-icon 00:01:37.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:37.229 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:37.229 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.229 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.229 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.229 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:37.229 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.229 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:37.229 ++ SUPPORT_END=2024-11-12 00:01:37.229 ++ VARIANT='Cloud Edition' 00:01:37.229 ++ VARIANT_ID=cloud 00:01:37.229 + uname -a 00:01:37.229 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:37.229 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:37.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:37.752 Hugepages 00:01:37.752 node hugesize free / total 00:01:37.752 node0 1048576kB 0 / 0 00:01:37.752 node0 2048kB 0 / 0 00:01:37.752 00:01:37.752 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:37.752 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.013 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.013 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:38.013 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:38.013 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:38.013 + rm -f /tmp/spdk-ld-path 00:01:38.013 + source autorun-spdk.conf 00:01:38.013 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.013 ++ SPDK_TEST_NVME=1 00:01:38.013 ++ SPDK_TEST_FTL=1 00:01:38.013 ++ SPDK_TEST_ISAL=1 00:01:38.013 ++ SPDK_RUN_ASAN=1 00:01:38.013 ++ SPDK_RUN_UBSAN=1 00:01:38.013 ++ SPDK_TEST_XNVME=1 00:01:38.013 ++ SPDK_TEST_NVME_FDP=1 00:01:38.013 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.013 ++ RUN_NIGHTLY=1 00:01:38.013 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.013 + [[ -n '' ]] 00:01:38.013 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.013 + for M in /var/spdk/build-*-manifest.txt 00:01:38.013 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:38.013 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.013 + for M in /var/spdk/build-*-manifest.txt 00:01:38.013 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.013 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.013 + for M in /var/spdk/build-*-manifest.txt 00:01:38.013 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.013 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.013 ++ uname 00:01:38.013 + [[ Linux == \L\i\n\u\x ]] 00:01:38.013 + sudo dmesg -T 00:01:38.013 + sudo dmesg --clear 00:01:38.013 + dmesg_pid=5028 00:01:38.013 
+ [[ Fedora Linux == FreeBSD ]] 00:01:38.013 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.013 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.013 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.013 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.013 + sudo dmesg -Tw 00:01:38.013 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.013 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.013 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.013 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.013 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.013 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.013 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.013 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.013 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.013 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.013 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.013 Test configuration: 00:01:38.013 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.013 SPDK_TEST_NVME=1 00:01:38.013 SPDK_TEST_FTL=1 00:01:38.013 SPDK_TEST_ISAL=1 00:01:38.013 SPDK_RUN_ASAN=1 00:01:38.013 SPDK_RUN_UBSAN=1 00:01:38.013 SPDK_TEST_XNVME=1 00:01:38.013 SPDK_TEST_NVME_FDP=1 00:01:38.013 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.278 RUN_NIGHTLY=1 17:31:27 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:38.278 17:31:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:38.278 17:31:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:38.278 17:31:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.278 17:31:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.278 17:31:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.278 17:31:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.278 17:31:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.278 17:31:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.278 17:31:27 -- paths/export.sh@5 -- $ export PATH 00:01:38.278 17:31:27 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.278 17:31:27 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:38.278 17:31:27 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:38.278 17:31:27 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728840687.XXXXXX 00:01:38.278 17:31:27 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728840687.E4cjX3 00:01:38.278 17:31:27 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:38.278 17:31:27 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:38.278 17:31:27 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:38.278 17:31:27 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:38.278 17:31:27 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.278 17:31:27 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:38.278 17:31:27 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:38.278 17:31:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.278 17:31:27 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:38.278 17:31:27 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:38.278 17:31:27 -- pm/common@17 -- $ local monitor 00:01:38.278 17:31:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.278 17:31:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.278 17:31:27 -- pm/common@25 -- $ sleep 1 00:01:38.278 17:31:27 -- pm/common@21 -- $ date +%s 00:01:38.278 17:31:27 -- pm/common@21 -- $ date +%s 00:01:38.278 17:31:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728840687 00:01:38.278 17:31:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728840687 00:01:38.278 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728840687_collect-vmstat.pm.log 00:01:38.278 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728840687_collect-cpu-load.pm.log 00:01:39.226 17:31:28 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:39.226 17:31:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.226 17:31:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.226 17:31:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:39.226 17:31:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.226 Sun Oct 13 05:31:28 PM UTC 2024 00:01:39.226 17:31:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.226 v25.01-pre-55-gbbce7a874 
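Before any tests run, the autobuild prologue above starts two per-build resource monitors whose output lands under the power/ directory (the two "Redirecting to ..." lines). A minimal sketch of that startup, assuming the pm/common helper simply backgrounds each collector:

    # Sketch only: script paths and flags are taken verbatim from the trace
    # above; the '&' backgrounding is an assumption about what pm/common does.
    repo=/home/vagrant/spdk_repo/spdk
    power_dir=$repo/../output/power
    stamp=$(date +%s)    # 1728840687 in this run
    "$repo/scripts/perf/pm/collect-cpu-load" -d "$power_dir" -l -p "monitor.autobuild.sh.$stamp" &
    "$repo/scripts/perf/pm/collect-vmstat" -d "$power_dir" -l -p "monitor.autobuild.sh.$stamp" &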
00:01:39.226 17:31:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:39.226 17:31:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:39.226 17:31:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.226 17:31:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.226 17:31:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.226 ************************************ 00:01:39.226 START TEST asan 00:01:39.226 ************************************ 00:01:39.226 using asan 00:01:39.226 17:31:28 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:39.226 00:01:39.226 real 0m0.000s 00:01:39.226 user 0m0.000s 00:01:39.226 sys 0m0.000s 00:01:39.226 ************************************ 00:01:39.226 END TEST asan 00:01:39.226 ************************************ 00:01:39.226 17:31:28 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:39.226 17:31:28 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.226 17:31:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.226 17:31:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.226 17:31:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.226 17:31:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.226 17:31:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.226 ************************************ 00:01:39.226 START TEST ubsan 00:01:39.226 ************************************ 00:01:39.226 using ubsan 00:01:39.226 17:31:29 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:39.226 00:01:39.226 real 0m0.000s 00:01:39.226 user 0m0.000s 00:01:39.226 sys 0m0.000s 00:01:39.226 ************************************ 00:01:39.226 END TEST ubsan 00:01:39.226 ************************************ 00:01:39.226 17:31:29 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:39.226 17:31:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.488 17:31:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:39.488 17:31:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:39.488 17:31:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:39.488 17:31:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:39.488 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:39.488 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:40.060 Using 'verbs' RDMA provider 00:01:53.236 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:03.219 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:03.219 Creating mk/config.mk...done. 00:02:03.219 Creating mk/cc.flags.mk...done. 00:02:03.219 Type 'make' to build. 
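The configure step above pins the feature set for this job (debug build, ASAN/UBSAN, coverage, the fio plugin, ublk, xnvme, shared libraries). To rebuild the same tree by hand, something like the following, with the flags copied verbatim from the trace and -j10 matching the run_test invocation that follows:

    # Minimal sketch; assumes a checkout at /home/vagrant/spdk_repo/spdk.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10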
00:02:03.219 17:31:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:03.219 17:31:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:03.219 17:31:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:03.219 17:31:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.219 ************************************ 00:02:03.219 START TEST make 00:02:03.219 ************************************ 00:02:03.219 17:31:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:03.219 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:03.219 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:03.219 meson setup builddir \ 00:02:03.219 -Dwith-libaio=enabled \ 00:02:03.219 -Dwith-liburing=enabled \ 00:02:03.219 -Dwith-libvfn=disabled \ 00:02:03.219 -Dwith-spdk=false && \ 00:02:03.219 meson compile -C builddir && \ 00:02:03.219 cd -) 00:02:03.219 make[1]: Nothing to be done for 'all'. 00:02:05.745 The Meson build system 00:02:05.745 Version: 1.5.0 00:02:05.745 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:05.745 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:05.745 Build type: native build 00:02:05.745 Project name: xnvme 00:02:05.745 Project version: 0.7.3 00:02:05.745 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.745 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.745 Host machine cpu family: x86_64 00:02:05.745 Host machine cpu: x86_64 00:02:05.745 Message: host_machine.system: linux 00:02:05.745 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:05.745 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:05.745 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:05.745 Run-time dependency threads found: YES 00:02:05.745 Has header "setupapi.h" : NO 00:02:05.745 Has header "linux/blkzoned.h" : YES 00:02:05.745 Has header "linux/blkzoned.h" : YES (cached) 00:02:05.745 Has header "libaio.h" : YES 00:02:05.745 Library aio found: YES 00:02:05.745 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.745 Run-time dependency liburing found: YES 2.2 00:02:05.745 Dependency libvfn skipped: feature with-libvfn disabled 00:02:05.745 Run-time dependency appleframeworks found: NO (tried framework) 00:02:05.745 Run-time dependency appleframeworks found: NO (tried framework) 00:02:05.745 Configuring xnvme_config.h using configuration 00:02:05.745 Configuring xnvme.spec using configuration 00:02:05.745 Run-time dependency bash-completion found: YES 2.11 00:02:05.745 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:05.745 Program cp found: YES (/usr/bin/cp) 00:02:05.745 Has header "winsock2.h" : NO 00:02:05.745 Has header "dbghelp.h" : NO 00:02:05.745 Library rpcrt4 found: NO 00:02:05.745 Library rt found: YES 00:02:05.745 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:05.745 Found CMake: /usr/bin/cmake (3.27.7) 00:02:05.745 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:05.745 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:05.745 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:05.745 Build targets in project: 32 00:02:05.745 00:02:05.745 xnvme 0.7.3 00:02:05.745 00:02:05.745 User defined options 00:02:05.745 with-libaio : enabled 00:02:05.745 with-liburing: enabled 00:02:05.745 with-libvfn : disabled 00:02:05.745 with-spdk : false 00:02:05.745 00:02:05.745 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.745 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:05.745 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:05.745 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:05.745 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:05.745 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:05.745 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:05.745 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:06.003 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:06.003 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:06.003 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:06.003 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:06.003 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:06.003 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:06.003 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:06.003 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:06.003 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:06.003 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:06.003 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:06.003 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:06.003 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:06.003 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:06.003 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:06.003 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:06.003 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:06.003 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:06.003 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:06.003 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:06.262 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:06.262 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:06.262 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:06.262 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:06.262 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:06.262 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:06.262 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:06.262 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:06.262 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:06.262 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:06.262 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:06.262 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:06.262 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:06.262 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:06.262 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 
00:02:06.262 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:06.262 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:06.262 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:06.262 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:06.262 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:06.262 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:06.262 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:06.262 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:06.262 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:06.262 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:06.262 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:06.262 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:06.262 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:06.262 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:06.262 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:06.262 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:06.262 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:06.262 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:06.520 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:06.520 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:06.520 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:06.520 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:06.520 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:06.520 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:06.520 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:06.520 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:06.520 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:06.520 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:06.520 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:06.520 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:06.520 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:06.520 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:06.520 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:06.520 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:06.520 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:06.520 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:06.520 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:06.520 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:06.520 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:06.778 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:06.778 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:06.778 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:06.778 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:06.778 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:06.778 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:06.778 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:06.778 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:06.778 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:06.778 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:06.778 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:06.778 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:06.778 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:06.778 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:06.778 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:06.778 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:06.778 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:06.778 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:06.778 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:06.778 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:06.778 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:06.778 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:06.778 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:06.778 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:07.036 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:07.036 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:07.036 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:07.036 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:07.036 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:07.036 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:07.036 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:07.036 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:07.036 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:07.036 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:07.036 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:07.036 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:07.036 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:07.036 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:07.036 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:07.036 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:07.036 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:07.036 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:07.036 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:07.036 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:07.036 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:07.036 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:07.036 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:07.036 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:07.036 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_req.c.o 00:02:07.036 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:07.037 [131/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:07.037 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:07.037 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:07.037 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:07.295 [135/203] Linking target lib/libxnvme.so 00:02:07.295 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:07.295 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:07.295 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:07.295 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:07.295 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:07.295 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:07.295 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:07.295 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:07.295 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:07.295 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:07.295 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:07.295 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:07.295 [148/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:07.295 [149/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:07.295 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:07.295 [151/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:07.295 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:07.295 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:07.295 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:07.553 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:07.553 [156/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:07.553 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:07.553 [158/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:07.553 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:07.553 [160/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:07.553 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:07.553 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:07.553 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:07.553 [164/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:07.553 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:07.553 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:07.553 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:07.553 [168/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:07.553 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:07.811 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:07.811 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:07.811 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:07.811 [173/203] Linking static target lib/libxnvme.a 00:02:07.811 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:07.811 [175/203] Linking target 
tests/xnvme_tests_async_intf 00:02:07.811 [176/203] Linking target tests/xnvme_tests_buf 00:02:07.811 [177/203] Linking target tests/xnvme_tests_cli 00:02:07.811 [178/203] Linking target tests/xnvme_tests_enum 00:02:07.811 [179/203] Linking target tests/xnvme_tests_lblk 00:02:07.811 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:02:07.811 [181/203] Linking target tests/xnvme_tests_scc 00:02:07.811 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:07.811 [183/203] Linking target tests/xnvme_tests_ioworker 00:02:07.811 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:07.811 [185/203] Linking target tests/xnvme_tests_map 00:02:07.811 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:07.811 [187/203] Linking target tests/xnvme_tests_kvs 00:02:07.811 [188/203] Linking target tests/xnvme_tests_znd_append 00:02:07.811 [189/203] Linking target tools/xnvme_file 00:02:07.811 [190/203] Linking target tests/xnvme_tests_znd_state 00:02:07.811 [191/203] Linking target tools/lblk 00:02:07.811 [192/203] Linking target tools/xnvme 00:02:07.811 [193/203] Linking target examples/xnvme_enum 00:02:07.811 [194/203] Linking target examples/xnvme_io_async 00:02:07.811 [195/203] Linking target tools/xdd 00:02:07.811 [196/203] Linking target tools/zoned 00:02:07.811 [197/203] Linking target examples/xnvme_dev 00:02:07.811 [198/203] Linking target examples/xnvme_hello 00:02:07.811 [199/203] Linking target tools/kvs 00:02:07.811 [200/203] Linking target examples/xnvme_single_sync 00:02:07.811 [201/203] Linking target examples/xnvme_single_async 00:02:07.811 [202/203] Linking target examples/zoned_io_async 00:02:07.811 [203/203] Linking target examples/zoned_io_sync 00:02:07.811 INFO: autodetecting backend as ninja 00:02:07.811 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:08.070 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:14.627 The Meson build system 00:02:14.627 Version: 1.5.0 00:02:14.627 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.627 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.627 Build type: native build 00:02:14.627 Program cat found: YES (/usr/bin/cat) 00:02:14.627 Project name: DPDK 00:02:14.627 Project version: 24.03.0 00:02:14.627 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.627 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.627 Host machine cpu family: x86_64 00:02:14.627 Host machine cpu: x86_64 00:02:14.627 Message: ## Building in Developer Mode ## 00:02:14.627 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.627 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.627 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.627 Program python3 found: YES (/usr/bin/python3) 00:02:14.627 Program cat found: YES (/usr/bin/cat) 00:02:14.627 Compiler for C supports arguments -march=native: YES 00:02:14.627 Checking for size of "void *" : 8 00:02:14.627 Checking for size of "void *" : 8 (cached) 00:02:14.627 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:14.627 Library m found: YES 00:02:14.627 Library numa found: YES 00:02:14.627 Has header "numaif.h" : YES 00:02:14.627 Library fdt found: NO 00:02:14.627 Library execinfo found: NO 00:02:14.627 Has header "execinfo.h" : YES 00:02:14.627 Found pkg-config: YES (/usr/bin/pkg-config) 
1.9.5 00:02:14.627 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.627 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.627 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.627 Run-time dependency openssl found: YES 3.1.1 00:02:14.627 Run-time dependency libpcap found: YES 1.10.4 00:02:14.627 Has header "pcap.h" with dependency libpcap: YES 00:02:14.627 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.627 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.627 Compiler for C supports arguments -Wformat: YES 00:02:14.627 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.627 Compiler for C supports arguments -Wformat-security: NO 00:02:14.627 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.627 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.627 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.627 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.627 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.627 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.627 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.627 Compiler for C supports arguments -Wundef: YES 00:02:14.627 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.627 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.627 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.627 Program objdump found: YES (/usr/bin/objdump) 00:02:14.627 Compiler for C supports arguments -mavx512f: YES 00:02:14.627 Checking if "AVX512 checking" compiles: YES 00:02:14.627 Fetching value of define "__SSE4_2__" : 1 00:02:14.627 Fetching value of define "__AES__" : 1 00:02:14.627 Fetching value of define "__AVX__" : 1 00:02:14.627 Fetching value of define "__AVX2__" : 1 00:02:14.627 Fetching value of define "__AVX512BW__" : 1 00:02:14.627 Fetching value of define "__AVX512CD__" : 1 00:02:14.627 Fetching value of define "__AVX512DQ__" : 1 00:02:14.627 Fetching value of define "__AVX512F__" : 1 00:02:14.627 Fetching value of define "__AVX512VL__" : 1 00:02:14.627 Fetching value of define "__PCLMUL__" : 1 00:02:14.627 Fetching value of define "__RDRND__" : 1 00:02:14.627 Fetching value of define "__RDSEED__" : 1 00:02:14.627 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:14.627 Fetching value of define "__znver1__" : (undefined) 00:02:14.627 Fetching value of define "__znver2__" : (undefined) 00:02:14.627 Fetching value of define "__znver3__" : (undefined) 00:02:14.627 Fetching value of define "__znver4__" : (undefined) 00:02:14.627 Library asan found: YES 00:02:14.627 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.627 Message: lib/log: Defining dependency "log" 00:02:14.627 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.627 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.627 Library rt found: YES 00:02:14.627 Checking for function "getentropy" : NO 00:02:14.627 Message: lib/eal: Defining dependency "eal" 00:02:14.627 Message: lib/ring: Defining dependency "ring" 00:02:14.627 Message: lib/rcu: Defining dependency "rcu" 00:02:14.627 Message: lib/mempool: Defining dependency "mempool" 00:02:14.627 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.627 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:14.627 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.627 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.627 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.627 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.627 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:14.627 Compiler for C supports arguments -mpclmul: YES 00:02:14.627 Compiler for C supports arguments -maes: YES 00:02:14.627 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.627 Compiler for C supports arguments -mavx512bw: YES 00:02:14.627 Compiler for C supports arguments -mavx512dq: YES 00:02:14.627 Compiler for C supports arguments -mavx512vl: YES 00:02:14.627 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.627 Compiler for C supports arguments -mavx2: YES 00:02:14.627 Compiler for C supports arguments -mavx: YES 00:02:14.627 Message: lib/net: Defining dependency "net" 00:02:14.627 Message: lib/meter: Defining dependency "meter" 00:02:14.627 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.627 Message: lib/pci: Defining dependency "pci" 00:02:14.627 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.627 Message: lib/hash: Defining dependency "hash" 00:02:14.627 Message: lib/timer: Defining dependency "timer" 00:02:14.627 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.627 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.627 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.627 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.627 Message: lib/power: Defining dependency "power" 00:02:14.627 Message: lib/reorder: Defining dependency "reorder" 00:02:14.627 Message: lib/security: Defining dependency "security" 00:02:14.627 Has header "linux/userfaultfd.h" : YES 00:02:14.627 Has header "linux/vduse.h" : YES 00:02:14.627 Message: lib/vhost: Defining dependency "vhost" 00:02:14.627 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.627 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.627 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.627 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.627 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.627 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.627 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.627 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.627 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.627 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.627 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.627 Configuring doxy-api-html.conf using configuration 00:02:14.627 Configuring doxy-api-man.conf using configuration 00:02:14.627 Program mandb found: YES (/usr/bin/mandb) 00:02:14.627 Program sphinx-build found: NO 00:02:14.627 Configuring rte_build_config.h using configuration 00:02:14.627 Message: 00:02:14.627 ================= 00:02:14.627 Applications Enabled 00:02:14.627 ================= 00:02:14.627 00:02:14.627 apps: 00:02:14.627 00:02:14.627 00:02:14.627 Message: 00:02:14.627 ================= 00:02:14.627 Libraries Enabled 00:02:14.627 ================= 00:02:14.627 00:02:14.627 libs: 00:02:14.627 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.627 net, meter, 
ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.627 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.627 00:02:14.627 Message: 00:02:14.627 =============== 00:02:14.627 Drivers Enabled 00:02:14.627 =============== 00:02:14.627 00:02:14.627 common: 00:02:14.627 00:02:14.627 bus: 00:02:14.627 pci, vdev, 00:02:14.627 mempool: 00:02:14.627 ring, 00:02:14.627 dma: 00:02:14.627 00:02:14.627 net: 00:02:14.627 00:02:14.627 crypto: 00:02:14.627 00:02:14.627 compress: 00:02:14.627 00:02:14.627 vdpa: 00:02:14.627 00:02:14.627 00:02:14.627 Message: 00:02:14.627 ================= 00:02:14.627 Content Skipped 00:02:14.627 ================= 00:02:14.627 00:02:14.627 apps: 00:02:14.627 dumpcap: explicitly disabled via build config 00:02:14.627 graph: explicitly disabled via build config 00:02:14.627 pdump: explicitly disabled via build config 00:02:14.627 proc-info: explicitly disabled via build config 00:02:14.628 test-acl: explicitly disabled via build config 00:02:14.628 test-bbdev: explicitly disabled via build config 00:02:14.628 test-cmdline: explicitly disabled via build config 00:02:14.628 test-compress-perf: explicitly disabled via build config 00:02:14.628 test-crypto-perf: explicitly disabled via build config 00:02:14.628 test-dma-perf: explicitly disabled via build config 00:02:14.628 test-eventdev: explicitly disabled via build config 00:02:14.628 test-fib: explicitly disabled via build config 00:02:14.628 test-flow-perf: explicitly disabled via build config 00:02:14.628 test-gpudev: explicitly disabled via build config 00:02:14.628 test-mldev: explicitly disabled via build config 00:02:14.628 test-pipeline: explicitly disabled via build config 00:02:14.628 test-pmd: explicitly disabled via build config 00:02:14.628 test-regex: explicitly disabled via build config 00:02:14.628 test-sad: explicitly disabled via build config 00:02:14.628 test-security-perf: explicitly disabled via build config 00:02:14.628 00:02:14.628 libs: 00:02:14.628 argparse: explicitly disabled via build config 00:02:14.628 metrics: explicitly disabled via build config 00:02:14.628 acl: explicitly disabled via build config 00:02:14.628 bbdev: explicitly disabled via build config 00:02:14.628 bitratestats: explicitly disabled via build config 00:02:14.628 bpf: explicitly disabled via build config 00:02:14.628 cfgfile: explicitly disabled via build config 00:02:14.628 distributor: explicitly disabled via build config 00:02:14.628 efd: explicitly disabled via build config 00:02:14.628 eventdev: explicitly disabled via build config 00:02:14.628 dispatcher: explicitly disabled via build config 00:02:14.628 gpudev: explicitly disabled via build config 00:02:14.628 gro: explicitly disabled via build config 00:02:14.628 gso: explicitly disabled via build config 00:02:14.628 ip_frag: explicitly disabled via build config 00:02:14.628 jobstats: explicitly disabled via build config 00:02:14.628 latencystats: explicitly disabled via build config 00:02:14.628 lpm: explicitly disabled via build config 00:02:14.628 member: explicitly disabled via build config 00:02:14.628 pcapng: explicitly disabled via build config 00:02:14.628 rawdev: explicitly disabled via build config 00:02:14.628 regexdev: explicitly disabled via build config 00:02:14.628 mldev: explicitly disabled via build config 00:02:14.628 rib: explicitly disabled via build config 00:02:14.628 sched: explicitly disabled via build config 00:02:14.628 stack: explicitly disabled via build config 00:02:14.628 ipsec: explicitly disabled via build config 
00:02:14.628 pdcp: explicitly disabled via build config
00:02:14.628 fib: explicitly disabled via build config
00:02:14.628 port: explicitly disabled via build config
00:02:14.628 pdump: explicitly disabled via build config
00:02:14.628 table: explicitly disabled via build config
00:02:14.628 pipeline: explicitly disabled via build config
00:02:14.628 graph: explicitly disabled via build config
00:02:14.628 node: explicitly disabled via build config
00:02:14.628 
00:02:14.628 drivers:
00:02:14.628 common/cpt: not in enabled drivers build config
00:02:14.628 common/dpaax: not in enabled drivers build config
00:02:14.628 common/iavf: not in enabled drivers build config
00:02:14.628 common/idpf: not in enabled drivers build config
00:02:14.628 common/ionic: not in enabled drivers build config
00:02:14.628 common/mvep: not in enabled drivers build config
00:02:14.628 common/octeontx: not in enabled drivers build config
00:02:14.628 bus/auxiliary: not in enabled drivers build config
00:02:14.628 bus/cdx: not in enabled drivers build config
00:02:14.628 bus/dpaa: not in enabled drivers build config
00:02:14.628 bus/fslmc: not in enabled drivers build config
00:02:14.628 bus/ifpga: not in enabled drivers build config
00:02:14.628 bus/platform: not in enabled drivers build config
00:02:14.628 bus/uacce: not in enabled drivers build config
00:02:14.628 bus/vmbus: not in enabled drivers build config
00:02:14.628 common/cnxk: not in enabled drivers build config
00:02:14.628 common/mlx5: not in enabled drivers build config
00:02:14.628 common/nfp: not in enabled drivers build config
00:02:14.628 common/nitrox: not in enabled drivers build config
00:02:14.628 common/qat: not in enabled drivers build config
00:02:14.628 common/sfc_efx: not in enabled drivers build config
00:02:14.628 mempool/bucket: not in enabled drivers build config
00:02:14.628 mempool/cnxk: not in enabled drivers build config
00:02:14.628 mempool/dpaa: not in enabled drivers build config
00:02:14.628 mempool/dpaa2: not in enabled drivers build config
00:02:14.628 mempool/octeontx: not in enabled drivers build config
00:02:14.628 mempool/stack: not in enabled drivers build config
00:02:14.628 dma/cnxk: not in enabled drivers build config
00:02:14.628 dma/dpaa: not in enabled drivers build config
00:02:14.628 dma/dpaa2: not in enabled drivers build config
00:02:14.628 dma/hisilicon: not in enabled drivers build config
00:02:14.628 dma/idxd: not in enabled drivers build config
00:02:14.628 dma/ioat: not in enabled drivers build config
00:02:14.628 dma/skeleton: not in enabled drivers build config
00:02:14.628 net/af_packet: not in enabled drivers build config
00:02:14.628 net/af_xdp: not in enabled drivers build config
00:02:14.628 net/ark: not in enabled drivers build config
00:02:14.628 net/atlantic: not in enabled drivers build config
00:02:14.628 net/avp: not in enabled drivers build config
00:02:14.628 net/axgbe: not in enabled drivers build config
00:02:14.628 net/bnx2x: not in enabled drivers build config
00:02:14.628 net/bnxt: not in enabled drivers build config
00:02:14.628 net/bonding: not in enabled drivers build config
00:02:14.628 net/cnxk: not in enabled drivers build config
00:02:14.628 net/cpfl: not in enabled drivers build config
00:02:14.628 net/cxgbe: not in enabled drivers build config
00:02:14.628 net/dpaa: not in enabled drivers build config
00:02:14.628 net/dpaa2: not in enabled drivers build config
00:02:14.628 net/e1000: not in enabled drivers build config
00:02:14.628 net/ena: not in enabled drivers build config
00:02:14.628 net/enetc: not in enabled drivers build config
00:02:14.628 net/enetfec: not in enabled drivers build config
00:02:14.628 net/enic: not in enabled drivers build config
00:02:14.628 net/failsafe: not in enabled drivers build config
00:02:14.628 net/fm10k: not in enabled drivers build config
00:02:14.628 net/gve: not in enabled drivers build config
00:02:14.628 net/hinic: not in enabled drivers build config
00:02:14.628 net/hns3: not in enabled drivers build config
00:02:14.628 net/i40e: not in enabled drivers build config
00:02:14.628 net/iavf: not in enabled drivers build config
00:02:14.628 net/ice: not in enabled drivers build config
00:02:14.628 net/idpf: not in enabled drivers build config
00:02:14.628 net/igc: not in enabled drivers build config
00:02:14.628 net/ionic: not in enabled drivers build config
00:02:14.628 net/ipn3ke: not in enabled drivers build config
00:02:14.628 net/ixgbe: not in enabled drivers build config
00:02:14.628 net/mana: not in enabled drivers build config
00:02:14.628 net/memif: not in enabled drivers build config
00:02:14.628 net/mlx4: not in enabled drivers build config
00:02:14.628 net/mlx5: not in enabled drivers build config
00:02:14.628 net/mvneta: not in enabled drivers build config
00:02:14.628 net/mvpp2: not in enabled drivers build config
00:02:14.628 net/netvsc: not in enabled drivers build config
00:02:14.628 net/nfb: not in enabled drivers build config
00:02:14.628 net/nfp: not in enabled drivers build config
00:02:14.628 net/ngbe: not in enabled drivers build config
00:02:14.628 net/null: not in enabled drivers build config
00:02:14.628 net/octeontx: not in enabled drivers build config
00:02:14.628 net/octeon_ep: not in enabled drivers build config
00:02:14.628 net/pcap: not in enabled drivers build config
00:02:14.628 net/pfe: not in enabled drivers build config
00:02:14.628 net/qede: not in enabled drivers build config
00:02:14.628 net/ring: not in enabled drivers build config
00:02:14.628 net/sfc: not in enabled drivers build config
00:02:14.628 net/softnic: not in enabled drivers build config
00:02:14.628 net/tap: not in enabled drivers build config
00:02:14.628 net/thunderx: not in enabled drivers build config
00:02:14.628 net/txgbe: not in enabled drivers build config
00:02:14.628 net/vdev_netvsc: not in enabled drivers build config
00:02:14.628 net/vhost: not in enabled drivers build config
00:02:14.628 net/virtio: not in enabled drivers build config
00:02:14.628 net/vmxnet3: not in enabled drivers build config
00:02:14.628 raw/*: missing internal dependency, "rawdev"
00:02:14.628 crypto/armv8: not in enabled drivers build config
00:02:14.628 crypto/bcmfs: not in enabled drivers build config
00:02:14.628 crypto/caam_jr: not in enabled drivers build config
00:02:14.628 crypto/ccp: not in enabled drivers build config
00:02:14.628 crypto/cnxk: not in enabled drivers build config
00:02:14.628 crypto/dpaa_sec: not in enabled drivers build config
00:02:14.628 crypto/dpaa2_sec: not in enabled drivers build config
00:02:14.628 crypto/ipsec_mb: not in enabled drivers build config
00:02:14.628 crypto/mlx5: not in enabled drivers build config
00:02:14.628 crypto/mvsam: not in enabled drivers build config
00:02:14.628 crypto/nitrox: not in enabled drivers build config
00:02:14.628 crypto/null: not in enabled drivers build config
00:02:14.628 crypto/octeontx: not in enabled drivers build config
00:02:14.628 crypto/openssl: not in enabled drivers build config
00:02:14.628 crypto/scheduler: not in enabled drivers build config
00:02:14.628 crypto/uadk: not in enabled drivers build config
00:02:14.628 crypto/virtio: not in enabled drivers build config
00:02:14.628 compress/isal: not in enabled drivers build config
00:02:14.628 compress/mlx5: not in enabled drivers build config
00:02:14.628 compress/nitrox: not in enabled drivers build config
00:02:14.628 compress/octeontx: not in enabled drivers build config
00:02:14.628 compress/zlib: not in enabled drivers build config
00:02:14.628 regex/*: missing internal dependency, "regexdev"
00:02:14.628 ml/*: missing internal dependency, "mldev"
00:02:14.628 vdpa/ifc: not in enabled drivers build config
00:02:14.628 vdpa/mlx5: not in enabled drivers build config
00:02:14.628 vdpa/nfp: not in enabled drivers build config
00:02:14.628 vdpa/sfc: not in enabled drivers build config
00:02:14.628 event/*: missing internal dependency, "eventdev"
00:02:14.628 baseband/*: missing internal dependency, "bbdev"
00:02:14.628 gpu/*: missing internal dependency, "gpudev"
00:02:14.628 
00:02:14.628 
00:02:14.628 Build targets in project: 84
00:02:14.628 
00:02:14.628 DPDK 24.03.0
00:02:14.628 
00:02:14.628 User defined options
00:02:14.628 buildtype : debug
00:02:14.628 default_library : shared
00:02:14.628 libdir : lib
00:02:14.628 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:14.628 b_sanitize : address
00:02:14.628 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:14.628 c_link_args : 
00:02:14.628 cpu_instruction_set: native
00:02:14.628 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:14.629 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:14.629 enable_docs : false
00:02:14.629 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:14.629 enable_kmods : false
00:02:14.629 max_lcores : 128
00:02:14.629 tests : false
00:02:14.629 
00:02:14.629 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.629 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:14.629 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:14.629 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:14.629 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:14.629 [4/267] Linking static target lib/librte_kvargs.a
00:02:14.629 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:14.629 [6/267] Linking static target lib/librte_log.a
00:02:14.629 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:14.629 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:14.629 [9/267] Linking static target lib/librte_telemetry.a
00:02:14.629 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:14.629 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:14.629 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:14.629 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:14.629 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:14.629 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:14.913 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.913 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:14.913 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:14.913 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:15.171 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:15.171 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:15.171 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:15.171 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:15.171 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:15.171 [25/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.171 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:15.171 [27/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.171 [28/267] Linking target lib/librte_log.so.24.1
00:02:15.429 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:15.429 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:15.429 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:15.429 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:15.429 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:15.429 [34/267] Linking target lib/librte_kvargs.so.24.1
00:02:15.429 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:15.429 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:15.429 [37/267] Linking target lib/librte_telemetry.so.24.1
00:02:15.429 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:15.429 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:15.687 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:15.687 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:15.687 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:15.687 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:15.687 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:15.687 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:15.687 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:15.945 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:15.945 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:15.945 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:15.945 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:15.945 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:16.203 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
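[Editor's note: the "User defined options" summary above records the full DPDK configuration this run used, so it can be reproduced outside the CI. The sketch below is assembled from that summary and from the "ninja ... -j 10" command that appears later in this log; it is an illustrative equivalent, not the literal command SPDK's dpdk build wrapper ran (the wrapper invocation itself never appears in the log):

    # Hedged sketch: option names and values are copied from the meson
    # "User defined options" block above; the shell variables and working
    # directory are assumptions made for readability.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    DISABLE_APPS=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
    DISABLE_LIBS=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
    meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps="$DISABLE_APPS" -Ddisable_libs="$DISABLE_LIBS" \
        -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    # -j 10 mirrors the ninja invocation the CI issues for this build tree
    ninja -C build-tmp -j 10

The "Content Skipped" sections above follow directly from these options: every entry in disable_apps/disable_libs is reported as "explicitly disabled via build config", and because enable_drivers limits the build to the pci/vdev buses and the ring mempool driver, all other drivers show up as "not in enabled drivers build config".]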
00:02:16.203 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.203 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.203 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.203 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.203 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.203 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.203 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.460 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.460 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.460 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.460 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.460 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.460 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.460 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.460 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.719 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.719 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.719 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.719 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.719 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.719 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.977 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.977 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.977 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.977 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.977 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.977 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.977 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.977 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.977 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.235 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.235 [84/267] Linking static target lib/librte_ring.a 00:02:17.235 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.235 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.235 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.494 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.494 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.494 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.494 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.494 [92/267] Linking static target lib/librte_eal.a 00:02:17.494 [93/267] Linking static target lib/librte_mempool.a 00:02:17.494 [94/267] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.494 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.494 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.753 [97/267] Linking static target lib/librte_rcu.a 00:02:17.753 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.753 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.753 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.753 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.753 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.011 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.011 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.011 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.011 [106/267] Linking static target lib/librte_meter.a 00:02:18.011 [107/267] Linking static target lib/librte_mbuf.a 00:02:18.011 [108/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.011 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:18.011 [110/267] Linking static target lib/librte_net.a 00:02:18.270 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.270 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.270 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.270 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.270 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.528 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.528 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.528 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.528 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.786 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.786 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.786 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.786 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.044 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.044 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.044 [126/267] Linking static target lib/librte_pci.a 00:02:19.044 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.044 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.044 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.044 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.044 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.303 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.303 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.303 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.303 [135/267] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.303 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.303 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.303 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.303 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.303 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.303 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.303 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.303 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.303 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.303 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.303 [146/267] Linking static target lib/librte_cmdline.a 00:02:19.561 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.561 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:19.561 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.819 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.819 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.819 [152/267] Linking static target lib/librte_timer.a 00:02:19.819 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.077 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.077 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.077 [156/267] Linking static target lib/librte_ethdev.a 00:02:20.077 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.077 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.077 [159/267] Linking static target lib/librte_compressdev.a 00:02:20.334 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.334 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.334 [162/267] Linking static target lib/librte_hash.a 00:02:20.334 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.334 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.334 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.334 [166/267] Linking static target lib/librte_dmadev.a 00:02:20.334 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.593 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.593 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.593 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.851 [171/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.851 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.851 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.851 [174/267] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.851 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.109 [176/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.109 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.109 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.109 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.109 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.109 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.109 [182/267] Linking static target lib/librte_cryptodev.a 00:02:21.109 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.368 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.368 [185/267] Linking static target lib/librte_power.a 00:02:21.368 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.627 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.627 [188/267] Linking static target lib/librte_security.a 00:02:21.627 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.627 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.627 [191/267] Linking static target lib/librte_reorder.a 00:02:21.627 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.885 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.143 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.143 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.143 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.143 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.143 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.400 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.401 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.401 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.658 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.658 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.658 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.658 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.915 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.915 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.915 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.915 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.915 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.173 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.173 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.173 [213/267] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.173 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.173 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:23.173 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.173 [217/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.173 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.173 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.173 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:23.173 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.173 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.173 [223/267] Linking static target drivers/librte_mempool_ring.a 00:02:23.430 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.430 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.430 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.688 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.063 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.063 [229/267] Linking target lib/librte_eal.so.24.1 00:02:25.063 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.063 [231/267] Linking target lib/librte_pci.so.24.1 00:02:25.064 [232/267] Linking target lib/librte_meter.so.24.1 00:02:25.321 [233/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.321 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.321 [235/267] Linking target lib/librte_ring.so.24.1 00:02:25.321 [236/267] Linking target lib/librte_timer.so.24.1 00:02:25.321 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.322 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.322 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.322 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.322 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.322 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:25.322 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:25.322 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.580 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.580 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.580 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.580 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.580 [249/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.580 [250/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.580 [251/267] Linking target lib/librte_net.so.24.1 00:02:25.580 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:25.580 [253/267] Linking target 
lib/librte_reorder.so.24.1 00:02:25.580 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.837 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.837 [256/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.837 [257/267] Linking target lib/librte_security.so.24.1 00:02:25.837 [258/267] Linking target lib/librte_hash.so.24.1 00:02:25.837 [259/267] Linking target lib/librte_cmdline.so.24.1 00:02:25.837 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:25.837 [261/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.837 [262/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.095 [263/267] Linking target lib/librte_power.so.24.1 00:02:27.997 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:27.997 [265/267] Linking static target lib/librte_vhost.a 00:02:28.938 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.196 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:29.196 INFO: autodetecting backend as ninja 00:02:29.196 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:44.075 CC lib/ut/ut.o 00:02:44.075 CC lib/ut_mock/mock.o 00:02:44.075 CC lib/log/log_deprecated.o 00:02:44.075 CC lib/log/log_flags.o 00:02:44.075 CC lib/log/log.o 00:02:44.075 LIB libspdk_ut.a 00:02:44.075 LIB libspdk_ut_mock.a 00:02:44.075 LIB libspdk_log.a 00:02:44.075 SO libspdk_ut.so.2.0 00:02:44.075 SO libspdk_ut_mock.so.6.0 00:02:44.075 SO libspdk_log.so.7.1 00:02:44.075 SYMLINK libspdk_ut.so 00:02:44.075 SYMLINK libspdk_ut_mock.so 00:02:44.075 SYMLINK libspdk_log.so 00:02:44.075 CC lib/util/base64.o 00:02:44.075 CC lib/util/bit_array.o 00:02:44.075 CC lib/util/cpuset.o 00:02:44.075 CC lib/util/crc16.o 00:02:44.075 CC lib/util/crc32.o 00:02:44.075 CC lib/dma/dma.o 00:02:44.075 CC lib/util/crc32c.o 00:02:44.075 CXX lib/trace_parser/trace.o 00:02:44.075 CC lib/ioat/ioat.o 00:02:44.075 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.075 CC lib/vfio_user/host/vfio_user.o 00:02:44.075 CC lib/util/crc32_ieee.o 00:02:44.075 CC lib/util/crc64.o 00:02:44.075 CC lib/util/dif.o 00:02:44.075 LIB libspdk_dma.a 00:02:44.075 CC lib/util/fd.o 00:02:44.075 CC lib/util/fd_group.o 00:02:44.075 CC lib/util/file.o 00:02:44.075 SO libspdk_dma.so.5.0 00:02:44.075 SYMLINK libspdk_dma.so 00:02:44.075 CC lib/util/hexlify.o 00:02:44.075 CC lib/util/iov.o 00:02:44.075 LIB libspdk_ioat.a 00:02:44.075 CC lib/util/math.o 00:02:44.075 LIB libspdk_vfio_user.a 00:02:44.075 CC lib/util/net.o 00:02:44.075 SO libspdk_ioat.so.7.0 00:02:44.075 SO libspdk_vfio_user.so.5.0 00:02:44.075 SYMLINK libspdk_ioat.so 00:02:44.075 CC lib/util/pipe.o 00:02:44.075 SYMLINK libspdk_vfio_user.so 00:02:44.075 CC lib/util/strerror_tls.o 00:02:44.075 CC lib/util/string.o 00:02:44.075 CC lib/util/uuid.o 00:02:44.075 CC lib/util/xor.o 00:02:44.075 CC lib/util/zipf.o 00:02:44.075 CC lib/util/md5.o 00:02:44.075 LIB libspdk_util.a 00:02:44.075 SO libspdk_util.so.10.0 00:02:44.075 LIB libspdk_trace_parser.a 00:02:44.075 SO libspdk_trace_parser.so.6.0 00:02:44.075 SYMLINK libspdk_util.so 00:02:44.075 SYMLINK libspdk_trace_parser.so 00:02:44.075 CC lib/conf/conf.o 00:02:44.075 CC lib/json/json_parse.o 00:02:44.075 CC lib/env_dpdk/env.o 00:02:44.075 CC lib/json/json_util.o 00:02:44.075 CC lib/env_dpdk/memory.o 00:02:44.075 CC 
lib/json/json_write.o 00:02:44.075 CC lib/idxd/idxd.o 00:02:44.075 CC lib/rdma_provider/common.o 00:02:44.075 CC lib/rdma_utils/rdma_utils.o 00:02:44.075 CC lib/vmd/vmd.o 00:02:44.075 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.075 LIB libspdk_rdma_utils.a 00:02:44.075 LIB libspdk_conf.a 00:02:44.075 SO libspdk_rdma_utils.so.1.0 00:02:44.075 SO libspdk_conf.so.6.0 00:02:44.075 CC lib/vmd/led.o 00:02:44.075 CC lib/env_dpdk/pci.o 00:02:44.075 SYMLINK libspdk_conf.so 00:02:44.075 SYMLINK libspdk_rdma_utils.so 00:02:44.075 CC lib/env_dpdk/init.o 00:02:44.075 CC lib/env_dpdk/threads.o 00:02:44.075 LIB libspdk_json.a 00:02:44.075 SO libspdk_json.so.6.0 00:02:44.075 CC lib/idxd/idxd_user.o 00:02:44.075 LIB libspdk_rdma_provider.a 00:02:44.075 SYMLINK libspdk_json.so 00:02:44.075 CC lib/idxd/idxd_kernel.o 00:02:44.075 SO libspdk_rdma_provider.so.6.0 00:02:44.075 CC lib/env_dpdk/pci_ioat.o 00:02:44.334 SYMLINK libspdk_rdma_provider.so 00:02:44.334 CC lib/env_dpdk/pci_virtio.o 00:02:44.334 CC lib/env_dpdk/pci_vmd.o 00:02:44.334 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.334 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.334 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.334 CC lib/env_dpdk/pci_idxd.o 00:02:44.334 CC lib/env_dpdk/pci_event.o 00:02:44.334 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.334 LIB libspdk_idxd.a 00:02:44.334 CC lib/env_dpdk/sigbus_handler.o 00:02:44.334 LIB libspdk_vmd.a 00:02:44.595 SO libspdk_idxd.so.12.1 00:02:44.595 SO libspdk_vmd.so.6.0 00:02:44.595 CC lib/env_dpdk/pci_dpdk.o 00:02:44.595 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.595 SYMLINK libspdk_vmd.so 00:02:44.595 SYMLINK libspdk_idxd.so 00:02:44.595 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.595 LIB libspdk_jsonrpc.a 00:02:44.595 SO libspdk_jsonrpc.so.6.0 00:02:44.595 SYMLINK libspdk_jsonrpc.so 00:02:44.853 CC lib/rpc/rpc.o 00:02:45.112 LIB libspdk_env_dpdk.a 00:02:45.112 SO libspdk_env_dpdk.so.15.0 00:02:45.112 LIB libspdk_rpc.a 00:02:45.112 SO libspdk_rpc.so.6.0 00:02:45.112 SYMLINK libspdk_env_dpdk.so 00:02:45.112 SYMLINK libspdk_rpc.so 00:02:45.371 CC lib/trace/trace_flags.o 00:02:45.371 CC lib/trace/trace_rpc.o 00:02:45.371 CC lib/trace/trace.o 00:02:45.371 CC lib/notify/notify.o 00:02:45.371 CC lib/notify/notify_rpc.o 00:02:45.371 CC lib/keyring/keyring.o 00:02:45.371 CC lib/keyring/keyring_rpc.o 00:02:45.628 LIB libspdk_notify.a 00:02:45.628 SO libspdk_notify.so.6.0 00:02:45.629 LIB libspdk_keyring.a 00:02:45.629 SYMLINK libspdk_notify.so 00:02:45.629 LIB libspdk_trace.a 00:02:45.629 SO libspdk_keyring.so.2.0 00:02:45.629 SO libspdk_trace.so.11.0 00:02:45.629 SYMLINK libspdk_keyring.so 00:02:45.629 SYMLINK libspdk_trace.so 00:02:45.887 CC lib/thread/thread.o 00:02:45.887 CC lib/thread/iobuf.o 00:02:45.887 CC lib/sock/sock.o 00:02:45.887 CC lib/sock/sock_rpc.o 00:02:46.146 LIB libspdk_sock.a 00:02:46.404 SO libspdk_sock.so.10.0 00:02:46.404 SYMLINK libspdk_sock.so 00:02:46.662 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.662 CC lib/nvme/nvme_fabric.o 00:02:46.662 CC lib/nvme/nvme_ctrlr.o 00:02:46.662 CC lib/nvme/nvme_pcie_common.o 00:02:46.662 CC lib/nvme/nvme_qpair.o 00:02:46.662 CC lib/nvme/nvme_ns_cmd.o 00:02:46.662 CC lib/nvme/nvme.o 00:02:46.662 CC lib/nvme/nvme_ns.o 00:02:46.662 CC lib/nvme/nvme_pcie.o 00:02:47.229 CC lib/nvme/nvme_quirks.o 00:02:47.229 CC lib/nvme/nvme_transport.o 00:02:47.229 CC lib/nvme/nvme_discovery.o 00:02:47.229 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.229 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.487 CC lib/nvme/nvme_tcp.o 00:02:47.487 CC lib/nvme/nvme_opal.o 00:02:47.487 LIB 
libspdk_thread.a 00:02:47.487 CC lib/nvme/nvme_io_msg.o 00:02:47.487 SO libspdk_thread.so.10.2 00:02:47.487 SYMLINK libspdk_thread.so 00:02:47.487 CC lib/nvme/nvme_poll_group.o 00:02:47.745 CC lib/blob/blobstore.o 00:02:47.745 CC lib/accel/accel.o 00:02:47.745 CC lib/blob/request.o 00:02:47.745 CC lib/nvme/nvme_zns.o 00:02:48.002 CC lib/blob/zeroes.o 00:02:48.002 CC lib/blob/blob_bs_dev.o 00:02:48.002 CC lib/nvme/nvme_stubs.o 00:02:48.002 CC lib/init/json_config.o 00:02:48.003 CC lib/init/subsystem.o 00:02:48.003 CC lib/init/subsystem_rpc.o 00:02:48.261 CC lib/init/rpc.o 00:02:48.261 CC lib/nvme/nvme_auth.o 00:02:48.261 CC lib/nvme/nvme_cuse.o 00:02:48.261 CC lib/nvme/nvme_rdma.o 00:02:48.261 LIB libspdk_init.a 00:02:48.261 SO libspdk_init.so.6.0 00:02:48.261 SYMLINK libspdk_init.so 00:02:48.518 CC lib/virtio/virtio.o 00:02:48.518 CC lib/virtio/virtio_vhost_user.o 00:02:48.518 CC lib/fsdev/fsdev.o 00:02:48.776 CC lib/fsdev/fsdev_io.o 00:02:48.776 CC lib/accel/accel_rpc.o 00:02:48.776 CC lib/virtio/virtio_vfio_user.o 00:02:48.776 CC lib/virtio/virtio_pci.o 00:02:49.035 CC lib/fsdev/fsdev_rpc.o 00:02:49.035 CC lib/accel/accel_sw.o 00:02:49.035 LIB libspdk_fsdev.a 00:02:49.035 LIB libspdk_virtio.a 00:02:49.035 SO libspdk_fsdev.so.1.0 00:02:49.294 CC lib/event/app.o 00:02:49.294 CC lib/event/reactor.o 00:02:49.294 CC lib/event/app_rpc.o 00:02:49.294 CC lib/event/log_rpc.o 00:02:49.294 CC lib/event/scheduler_static.o 00:02:49.294 SO libspdk_virtio.so.7.0 00:02:49.294 SYMLINK libspdk_fsdev.so 00:02:49.294 SYMLINK libspdk_virtio.so 00:02:49.294 LIB libspdk_accel.a 00:02:49.294 SO libspdk_accel.so.16.0 00:02:49.294 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.294 LIB libspdk_nvme.a 00:02:49.552 SYMLINK libspdk_accel.so 00:02:49.552 SO libspdk_nvme.so.14.0 00:02:49.552 CC lib/bdev/bdev.o 00:02:49.552 CC lib/bdev/bdev_rpc.o 00:02:49.552 CC lib/bdev/bdev_zone.o 00:02:49.552 CC lib/bdev/part.o 00:02:49.552 CC lib/bdev/scsi_nvme.o 00:02:49.552 LIB libspdk_event.a 00:02:49.811 SO libspdk_event.so.15.0 00:02:49.811 SYMLINK libspdk_event.so 00:02:49.811 SYMLINK libspdk_nvme.so 00:02:50.070 LIB libspdk_fuse_dispatcher.a 00:02:50.070 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.070 SYMLINK libspdk_fuse_dispatcher.so 00:02:50.328 LIB libspdk_blob.a 00:02:50.328 SO libspdk_blob.so.11.0 00:02:50.587 SYMLINK libspdk_blob.so 00:02:50.846 CC lib/blobfs/blobfs.o 00:02:50.846 CC lib/blobfs/tree.o 00:02:50.846 CC lib/lvol/lvol.o 00:02:51.782 LIB libspdk_blobfs.a 00:02:51.782 SO libspdk_blobfs.so.10.0 00:02:51.782 SYMLINK libspdk_blobfs.so 00:02:51.782 LIB libspdk_lvol.a 00:02:51.782 SO libspdk_lvol.so.10.0 00:02:51.782 SYMLINK libspdk_lvol.so 00:02:52.720 LIB libspdk_bdev.a 00:02:52.720 SO libspdk_bdev.so.17.0 00:02:52.720 SYMLINK libspdk_bdev.so 00:02:52.979 CC lib/nbd/nbd.o 00:02:52.979 CC lib/nbd/nbd_rpc.o 00:02:52.979 CC lib/nvmf/ctrlr.o 00:02:52.979 CC lib/nvmf/ctrlr_discovery.o 00:02:52.979 CC lib/nvmf/ctrlr_bdev.o 00:02:52.979 CC lib/scsi/dev.o 00:02:52.979 CC lib/scsi/lun.o 00:02:52.979 CC lib/scsi/port.o 00:02:52.979 CC lib/ublk/ublk.o 00:02:52.979 CC lib/ftl/ftl_core.o 00:02:52.979 CC lib/scsi/scsi.o 00:02:52.979 CC lib/scsi/scsi_bdev.o 00:02:52.979 CC lib/ftl/ftl_init.o 00:02:53.238 CC lib/nvmf/subsystem.o 00:02:53.238 CC lib/nvmf/nvmf.o 00:02:53.238 CC lib/nvmf/nvmf_rpc.o 00:02:53.238 LIB libspdk_nbd.a 00:02:53.238 CC lib/ftl/ftl_layout.o 00:02:53.238 SO libspdk_nbd.so.7.0 00:02:53.497 SYMLINK libspdk_nbd.so 00:02:53.497 CC lib/nvmf/transport.o 00:02:53.497 CC lib/scsi/scsi_pr.o 00:02:53.497 CC 
lib/nvmf/tcp.o 00:02:53.497 CC lib/ublk/ublk_rpc.o 00:02:53.497 CC lib/scsi/scsi_rpc.o 00:02:53.757 CC lib/ftl/ftl_debug.o 00:02:53.757 LIB libspdk_ublk.a 00:02:53.757 SO libspdk_ublk.so.3.0 00:02:53.757 CC lib/ftl/ftl_io.o 00:02:53.757 CC lib/scsi/task.o 00:02:53.757 SYMLINK libspdk_ublk.so 00:02:53.757 CC lib/ftl/ftl_sb.o 00:02:53.757 CC lib/nvmf/stubs.o 00:02:54.016 CC lib/nvmf/mdns_server.o 00:02:54.016 LIB libspdk_scsi.a 00:02:54.016 CC lib/nvmf/rdma.o 00:02:54.016 CC lib/ftl/ftl_l2p.o 00:02:54.016 SO libspdk_scsi.so.9.0 00:02:54.016 SYMLINK libspdk_scsi.so 00:02:54.016 CC lib/nvmf/auth.o 00:02:54.016 CC lib/ftl/ftl_l2p_flat.o 00:02:54.274 CC lib/ftl/ftl_nv_cache.o 00:02:54.274 CC lib/iscsi/conn.o 00:02:54.274 CC lib/iscsi/init_grp.o 00:02:54.274 CC lib/iscsi/iscsi.o 00:02:54.533 CC lib/iscsi/param.o 00:02:54.533 CC lib/iscsi/portal_grp.o 00:02:54.533 CC lib/vhost/vhost.o 00:02:54.533 CC lib/vhost/vhost_rpc.o 00:02:54.791 CC lib/iscsi/tgt_node.o 00:02:54.791 CC lib/iscsi/iscsi_subsystem.o 00:02:54.791 CC lib/iscsi/iscsi_rpc.o 00:02:54.791 CC lib/ftl/ftl_band.o 00:02:54.791 CC lib/ftl/ftl_band_ops.o 00:02:55.049 CC lib/iscsi/task.o 00:02:55.049 CC lib/ftl/ftl_writer.o 00:02:55.049 CC lib/vhost/vhost_scsi.o 00:02:55.049 CC lib/vhost/vhost_blk.o 00:02:55.049 CC lib/vhost/rte_vhost_user.o 00:02:55.049 CC lib/ftl/ftl_rq.o 00:02:55.308 CC lib/ftl/ftl_reloc.o 00:02:55.308 CC lib/ftl/ftl_l2p_cache.o 00:02:55.308 CC lib/ftl/ftl_p2l.o 00:02:55.308 CC lib/ftl/ftl_p2l_log.o 00:02:55.308 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.567 LIB libspdk_iscsi.a 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:55.567 SO libspdk_iscsi.so.8.0 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:55.567 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:55.825 SYMLINK libspdk_iscsi.so 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:55.825 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.825 CC lib/ftl/utils/ftl_conf.o 00:02:55.825 CC lib/ftl/utils/ftl_md.o 00:02:55.825 CC lib/ftl/utils/ftl_mempool.o 00:02:55.825 CC lib/ftl/utils/ftl_bitmap.o 00:02:55.825 LIB libspdk_vhost.a 00:02:56.084 CC lib/ftl/utils/ftl_property.o 00:02:56.084 SO libspdk_vhost.so.8.0 00:02:56.084 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.084 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.084 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.084 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.084 LIB libspdk_nvmf.a 00:02:56.084 SYMLINK libspdk_vhost.so 00:02:56.084 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.084 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.084 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.084 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.084 SO libspdk_nvmf.so.19.0 00:02:56.084 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.084 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.084 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.342 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:56.342 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:56.342 CC lib/ftl/base/ftl_base_dev.o 00:02:56.342 CC lib/ftl/base/ftl_base_bdev.o 00:02:56.342 CC lib/ftl/ftl_trace.o 00:02:56.342 SYMLINK libspdk_nvmf.so 00:02:56.600 LIB libspdk_ftl.a 00:02:56.600 SO libspdk_ftl.so.9.0 00:02:56.859 SYMLINK libspdk_ftl.so 00:02:57.118 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.118 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.118 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.118 CC module/fsdev/aio/fsdev_aio.o 00:02:57.118 CC module/keyring/file/keyring.o 00:02:57.118 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.383 CC module/keyring/linux/keyring.o 00:02:57.383 CC module/accel/error/accel_error.o 00:02:57.383 CC module/blob/bdev/blob_bdev.o 00:02:57.383 CC module/sock/posix/posix.o 00:02:57.383 LIB libspdk_env_dpdk_rpc.a 00:02:57.383 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.383 CC module/keyring/file/keyring_rpc.o 00:02:57.383 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.383 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.383 CC module/accel/error/accel_error_rpc.o 00:02:57.383 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.383 CC module/keyring/linux/keyring_rpc.o 00:02:57.383 LIB libspdk_scheduler_gscheduler.a 00:02:57.383 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.383 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.383 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.383 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.383 LIB libspdk_scheduler_dynamic.a 00:02:57.383 LIB libspdk_keyring_file.a 00:02:57.383 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.383 LIB libspdk_blob_bdev.a 00:02:57.383 SO libspdk_keyring_file.so.2.0 00:02:57.383 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.383 LIB libspdk_keyring_linux.a 00:02:57.383 SO libspdk_blob_bdev.so.11.0 00:02:57.384 LIB libspdk_accel_error.a 00:02:57.661 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.661 SO libspdk_keyring_linux.so.1.0 00:02:57.661 SO libspdk_accel_error.so.2.0 00:02:57.661 SYMLINK libspdk_keyring_file.so 00:02:57.661 SYMLINK libspdk_blob_bdev.so 00:02:57.661 SYMLINK libspdk_keyring_linux.so 00:02:57.661 SYMLINK libspdk_accel_error.so 00:02:57.661 CC module/accel/ioat/accel_ioat.o 00:02:57.661 CC module/accel/dsa/accel_dsa.o 00:02:57.661 CC module/accel/iaa/accel_iaa.o 00:02:57.661 CC module/bdev/error/vbdev_error.o 00:02:57.661 CC module/bdev/delay/vbdev_delay.o 00:02:57.661 CC module/bdev/gpt/gpt.o 00:02:57.661 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.661 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.919 LIB libspdk_fsdev_aio.a 00:02:57.919 SO libspdk_fsdev_aio.so.1.0 00:02:57.919 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.919 LIB libspdk_sock_posix.a 00:02:57.919 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.919 SYMLINK libspdk_fsdev_aio.so 00:02:57.919 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.919 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.919 SO libspdk_sock_posix.so.6.0 00:02:57.919 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.919 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.919 SYMLINK libspdk_sock_posix.so 00:02:57.919 LIB libspdk_accel_ioat.a 00:02:57.919 LIB libspdk_accel_iaa.a 00:02:57.919 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.919 SO libspdk_accel_ioat.so.6.0 00:02:57.919 SO libspdk_accel_iaa.so.3.0 00:02:58.177 LIB libspdk_bdev_error.a 00:02:58.177 LIB libspdk_bdev_delay.a 00:02:58.177 LIB libspdk_blobfs_bdev.a 00:02:58.177 SYMLINK libspdk_accel_ioat.so 00:02:58.177 SYMLINK libspdk_accel_iaa.so 00:02:58.177 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.177 SO libspdk_bdev_error.so.6.0 00:02:58.177 SO libspdk_bdev_delay.so.6.0 00:02:58.177 SO libspdk_blobfs_bdev.so.6.0 00:02:58.177 CC module/bdev/malloc/bdev_malloc.o 00:02:58.177 SYMLINK libspdk_bdev_error.so 00:02:58.177 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.177 LIB libspdk_bdev_gpt.a 00:02:58.177 LIB libspdk_accel_dsa.a 00:02:58.177 SYMLINK libspdk_bdev_delay.so 00:02:58.177 
SYMLINK libspdk_blobfs_bdev.so 00:02:58.177 SO libspdk_bdev_gpt.so.6.0 00:02:58.177 CC module/bdev/null/bdev_null.o 00:02:58.177 SO libspdk_accel_dsa.so.5.0 00:02:58.177 CC module/bdev/null/bdev_null_rpc.o 00:02:58.177 SYMLINK libspdk_bdev_gpt.so 00:02:58.177 SYMLINK libspdk_accel_dsa.so 00:02:58.177 CC module/bdev/nvme/bdev_nvme.o 00:02:58.177 CC module/bdev/raid/bdev_raid.o 00:02:58.177 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.435 CC module/bdev/split/vbdev_split.o 00:02:58.435 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.435 LIB libspdk_bdev_null.a 00:02:58.435 SO libspdk_bdev_null.so.6.0 00:02:58.435 LIB libspdk_bdev_lvol.a 00:02:58.435 LIB libspdk_bdev_malloc.a 00:02:58.435 SO libspdk_bdev_malloc.so.6.0 00:02:58.435 SO libspdk_bdev_lvol.so.6.0 00:02:58.435 SYMLINK libspdk_bdev_null.so 00:02:58.435 CC module/bdev/xnvme/bdev_xnvme.o 00:02:58.435 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.435 CC module/bdev/aio/bdev_aio.o 00:02:58.435 SYMLINK libspdk_bdev_lvol.so 00:02:58.435 SYMLINK libspdk_bdev_malloc.so 00:02:58.435 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.694 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.694 LIB libspdk_bdev_passthru.a 00:02:58.694 LIB libspdk_bdev_split.a 00:02:58.694 SO libspdk_bdev_passthru.so.6.0 00:02:58.694 SO libspdk_bdev_split.so.6.0 00:02:58.694 CC module/bdev/ftl/bdev_ftl.o 00:02:58.694 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.694 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.694 SYMLINK libspdk_bdev_passthru.so 00:02:58.694 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.694 SYMLINK libspdk_bdev_split.so 00:02:58.694 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.694 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:58.694 LIB libspdk_bdev_zone_block.a 00:02:58.694 SO libspdk_bdev_zone_block.so.6.0 00:02:58.694 SYMLINK libspdk_bdev_zone_block.so 00:02:58.694 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.694 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.694 LIB libspdk_bdev_aio.a 00:02:58.694 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.953 SO libspdk_bdev_aio.so.6.0 00:02:58.953 LIB libspdk_bdev_xnvme.a 00:02:58.953 LIB libspdk_bdev_ftl.a 00:02:58.953 SO libspdk_bdev_xnvme.so.3.0 00:02:58.953 SO libspdk_bdev_ftl.so.6.0 00:02:58.953 SYMLINK libspdk_bdev_aio.so 00:02:58.953 CC module/bdev/raid/raid0.o 00:02:58.953 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.953 SYMLINK libspdk_bdev_ftl.so 00:02:58.953 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.953 SYMLINK libspdk_bdev_xnvme.so 00:02:58.953 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.953 CC module/bdev/raid/raid1.o 00:02:58.953 LIB libspdk_bdev_iscsi.a 00:02:58.953 SO libspdk_bdev_iscsi.so.6.0 00:02:58.953 CC module/bdev/nvme/nvme_rpc.o 00:02:58.953 SYMLINK libspdk_bdev_iscsi.so 00:02:58.953 CC module/bdev/nvme/bdev_mdns_client.o 00:02:59.212 CC module/bdev/raid/concat.o 00:02:59.212 CC module/bdev/nvme/vbdev_opal.o 00:02:59.212 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:59.212 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:59.473 LIB libspdk_bdev_raid.a 00:02:59.473 SO libspdk_bdev_raid.so.6.0 00:02:59.473 LIB libspdk_bdev_virtio.a 00:02:59.473 SO libspdk_bdev_virtio.so.6.0 00:02:59.473 SYMLINK libspdk_bdev_raid.so 00:02:59.473 SYMLINK libspdk_bdev_virtio.so 00:03:00.414 LIB libspdk_bdev_nvme.a 00:03:00.414 SO libspdk_bdev_nvme.so.7.0 00:03:00.414 SYMLINK libspdk_bdev_nvme.so 00:03:00.984 CC module/event/subsystems/sock/sock.o 00:03:00.984 CC module/event/subsystems/vmd/vmd.o 00:03:00.984 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.984 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.984 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.984 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.984 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.984 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.984 CC module/event/subsystems/keyring/keyring.o 00:03:00.984 LIB libspdk_event_sock.a 00:03:00.984 LIB libspdk_event_vhost_blk.a 00:03:00.984 LIB libspdk_event_keyring.a 00:03:00.984 LIB libspdk_event_scheduler.a 00:03:00.984 LIB libspdk_event_vmd.a 00:03:00.984 LIB libspdk_event_fsdev.a 00:03:00.984 LIB libspdk_event_iobuf.a 00:03:00.984 SO libspdk_event_sock.so.5.0 00:03:00.984 SO libspdk_event_keyring.so.1.0 00:03:00.984 SO libspdk_event_vhost_blk.so.3.0 00:03:00.984 SO libspdk_event_scheduler.so.4.0 00:03:00.984 SO libspdk_event_vmd.so.6.0 00:03:00.984 SO libspdk_event_fsdev.so.1.0 00:03:00.984 SO libspdk_event_iobuf.so.3.0 00:03:00.984 SYMLINK libspdk_event_sock.so 00:03:00.984 SYMLINK libspdk_event_keyring.so 00:03:00.984 SYMLINK libspdk_event_vmd.so 00:03:00.984 SYMLINK libspdk_event_scheduler.so 00:03:00.984 SYMLINK libspdk_event_vhost_blk.so 00:03:00.984 SYMLINK libspdk_event_fsdev.so 00:03:01.243 SYMLINK libspdk_event_iobuf.so 00:03:01.243 CC module/event/subsystems/accel/accel.o 00:03:01.504 LIB libspdk_event_accel.a 00:03:01.504 SO libspdk_event_accel.so.6.0 00:03:01.504 SYMLINK libspdk_event_accel.so 00:03:01.764 CC module/event/subsystems/bdev/bdev.o 00:03:02.024 LIB libspdk_event_bdev.a 00:03:02.024 SO libspdk_event_bdev.so.6.0 00:03:02.024 SYMLINK libspdk_event_bdev.so 00:03:02.024 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.024 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.285 CC module/event/subsystems/nbd/nbd.o 00:03:02.285 CC module/event/subsystems/scsi/scsi.o 00:03:02.285 CC module/event/subsystems/ublk/ublk.o 00:03:02.285 LIB libspdk_event_nbd.a 00:03:02.285 LIB libspdk_event_ublk.a 00:03:02.285 LIB libspdk_event_scsi.a 00:03:02.285 SO libspdk_event_nbd.so.6.0 00:03:02.285 LIB libspdk_event_nvmf.a 00:03:02.285 SO libspdk_event_ublk.so.3.0 00:03:02.285 SO libspdk_event_scsi.so.6.0 00:03:02.285 SYMLINK libspdk_event_nbd.so 00:03:02.285 SO libspdk_event_nvmf.so.6.0 00:03:02.285 SYMLINK libspdk_event_ublk.so 00:03:02.285 SYMLINK libspdk_event_scsi.so 00:03:02.285 SYMLINK libspdk_event_nvmf.so 00:03:02.546 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.546 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.807 LIB libspdk_event_vhost_scsi.a 00:03:02.807 LIB libspdk_event_iscsi.a 00:03:02.807 SO libspdk_event_vhost_scsi.so.3.0 00:03:02.807 SO libspdk_event_iscsi.so.6.0 00:03:02.807 SYMLINK libspdk_event_vhost_scsi.so 00:03:02.807 SYMLINK libspdk_event_iscsi.so 00:03:02.807 SO libspdk.so.6.0 00:03:02.807 SYMLINK libspdk.so 00:03:03.068 CC app/trace_record/trace_record.o 00:03:03.068 CC app/spdk_lspci/spdk_lspci.o 00:03:03.068 CXX app/trace/trace.o 00:03:03.068 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.068 CC app/nvmf_tgt/nvmf_main.o 00:03:03.068 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.068 CC app/spdk_tgt/spdk_tgt.o 00:03:03.068 CC test/thread/poller_perf/poller_perf.o 00:03:03.068 CC examples/ioat/perf/perf.o 00:03:03.327 CC examples/util/zipf/zipf.o 00:03:03.327 LINK spdk_lspci 00:03:03.327 LINK zipf 00:03:03.327 LINK nvmf_tgt 00:03:03.327 LINK interrupt_tgt 00:03:03.327 LINK poller_perf 00:03:03.327 LINK spdk_trace_record 00:03:03.327 LINK iscsi_tgt 00:03:03.327 LINK spdk_tgt 00:03:03.327 LINK ioat_perf 00:03:03.327 CC app/spdk_nvme_perf/perf.o 00:03:03.586 
LINK spdk_trace 00:03:03.586 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.586 CC app/spdk_nvme_identify/identify.o 00:03:03.586 CC examples/sock/hello_world/hello_sock.o 00:03:03.586 CC app/spdk_top/spdk_top.o 00:03:03.586 CC examples/ioat/verify/verify.o 00:03:03.586 CC examples/thread/thread/thread_ex.o 00:03:03.586 CC test/dma/test_dma/test_dma.o 00:03:03.586 CC app/spdk_dd/spdk_dd.o 00:03:03.847 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.847 LINK spdk_nvme_discover 00:03:03.847 LINK verify 00:03:03.847 LINK hello_sock 00:03:03.847 LINK thread 00:03:03.847 LINK lsvmd 00:03:04.107 LINK spdk_dd 00:03:04.107 CC app/fio/nvme/fio_plugin.o 00:03:04.107 CC examples/idxd/perf/perf.o 00:03:04.107 CC examples/vmd/led/led.o 00:03:04.107 CC app/fio/bdev/fio_plugin.o 00:03:04.107 LINK test_dma 00:03:04.107 CC examples/nvme/hello_world/hello_world.o 00:03:04.107 LINK led 00:03:04.367 CC app/vhost/vhost.o 00:03:04.367 LINK spdk_nvme_perf 00:03:04.367 LINK idxd_perf 00:03:04.367 LINK hello_world 00:03:04.367 LINK spdk_nvme_identify 00:03:04.367 LINK vhost 00:03:04.367 CC test/app/bdev_svc/bdev_svc.o 00:03:04.663 TEST_HEADER include/spdk/accel.h 00:03:04.663 TEST_HEADER include/spdk/accel_module.h 00:03:04.663 TEST_HEADER include/spdk/assert.h 00:03:04.663 TEST_HEADER include/spdk/barrier.h 00:03:04.663 CC test/blobfs/mkfs/mkfs.o 00:03:04.663 TEST_HEADER include/spdk/base64.h 00:03:04.663 TEST_HEADER include/spdk/bdev.h 00:03:04.663 TEST_HEADER include/spdk/bdev_module.h 00:03:04.663 LINK spdk_top 00:03:04.663 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.663 TEST_HEADER include/spdk/bit_array.h 00:03:04.663 TEST_HEADER include/spdk/bit_pool.h 00:03:04.663 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.663 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.663 TEST_HEADER include/spdk/blobfs.h 00:03:04.663 TEST_HEADER include/spdk/blob.h 00:03:04.663 TEST_HEADER include/spdk/conf.h 00:03:04.663 TEST_HEADER include/spdk/config.h 00:03:04.663 TEST_HEADER include/spdk/cpuset.h 00:03:04.663 TEST_HEADER include/spdk/crc16.h 00:03:04.663 TEST_HEADER include/spdk/crc32.h 00:03:04.663 TEST_HEADER include/spdk/crc64.h 00:03:04.663 TEST_HEADER include/spdk/dif.h 00:03:04.663 TEST_HEADER include/spdk/dma.h 00:03:04.663 TEST_HEADER include/spdk/endian.h 00:03:04.663 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.663 TEST_HEADER include/spdk/env.h 00:03:04.663 TEST_HEADER include/spdk/event.h 00:03:04.663 TEST_HEADER include/spdk/fd_group.h 00:03:04.663 TEST_HEADER include/spdk/fd.h 00:03:04.663 TEST_HEADER include/spdk/file.h 00:03:04.663 LINK spdk_nvme 00:03:04.663 TEST_HEADER include/spdk/fsdev.h 00:03:04.663 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.663 TEST_HEADER include/spdk/ftl.h 00:03:04.663 LINK spdk_bdev 00:03:04.663 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.663 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.663 TEST_HEADER include/spdk/hexlify.h 00:03:04.663 TEST_HEADER include/spdk/histogram_data.h 00:03:04.663 TEST_HEADER include/spdk/idxd.h 00:03:04.663 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.663 TEST_HEADER include/spdk/init.h 00:03:04.663 TEST_HEADER include/spdk/ioat.h 00:03:04.663 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.663 CC examples/nvme/reconnect/reconnect.o 00:03:04.663 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.663 TEST_HEADER include/spdk/json.h 00:03:04.663 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.663 TEST_HEADER include/spdk/keyring.h 00:03:04.663 TEST_HEADER include/spdk/keyring_module.h 00:03:04.663 TEST_HEADER include/spdk/likely.h 00:03:04.663 TEST_HEADER 
include/spdk/log.h 00:03:04.663 TEST_HEADER include/spdk/lvol.h 00:03:04.663 TEST_HEADER include/spdk/md5.h 00:03:04.663 TEST_HEADER include/spdk/memory.h 00:03:04.663 TEST_HEADER include/spdk/mmio.h 00:03:04.663 TEST_HEADER include/spdk/nbd.h 00:03:04.663 TEST_HEADER include/spdk/net.h 00:03:04.663 TEST_HEADER include/spdk/notify.h 00:03:04.663 TEST_HEADER include/spdk/nvme.h 00:03:04.663 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.663 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.663 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.663 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.663 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.663 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.663 LINK bdev_svc 00:03:04.663 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.663 TEST_HEADER include/spdk/nvmf.h 00:03:04.663 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.663 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.663 TEST_HEADER include/spdk/opal.h 00:03:04.663 TEST_HEADER include/spdk/opal_spec.h 00:03:04.663 TEST_HEADER include/spdk/pci_ids.h 00:03:04.663 TEST_HEADER include/spdk/pipe.h 00:03:04.663 TEST_HEADER include/spdk/queue.h 00:03:04.663 TEST_HEADER include/spdk/reduce.h 00:03:04.663 TEST_HEADER include/spdk/rpc.h 00:03:04.663 TEST_HEADER include/spdk/scheduler.h 00:03:04.663 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.663 TEST_HEADER include/spdk/scsi.h 00:03:04.663 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.663 TEST_HEADER include/spdk/sock.h 00:03:04.663 TEST_HEADER include/spdk/stdinc.h 00:03:04.663 TEST_HEADER include/spdk/string.h 00:03:04.663 TEST_HEADER include/spdk/thread.h 00:03:04.663 TEST_HEADER include/spdk/trace.h 00:03:04.663 TEST_HEADER include/spdk/trace_parser.h 00:03:04.663 TEST_HEADER include/spdk/tree.h 00:03:04.663 TEST_HEADER include/spdk/ublk.h 00:03:04.663 TEST_HEADER include/spdk/util.h 00:03:04.663 TEST_HEADER include/spdk/uuid.h 00:03:04.663 TEST_HEADER include/spdk/version.h 00:03:04.663 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.663 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.663 TEST_HEADER include/spdk/vhost.h 00:03:04.663 TEST_HEADER include/spdk/vmd.h 00:03:04.663 TEST_HEADER include/spdk/xor.h 00:03:04.663 LINK mkfs 00:03:04.663 TEST_HEADER include/spdk/zipf.h 00:03:04.663 CXX test/cpp_headers/accel.o 00:03:04.663 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:04.663 CC examples/accel/perf/accel_perf.o 00:03:04.928 CC test/event/event_perf/event_perf.o 00:03:04.928 CC test/nvme/aer/aer.o 00:03:04.928 CXX test/cpp_headers/accel_module.o 00:03:04.928 CC test/lvol/esnap/esnap.o 00:03:04.928 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.928 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.928 LINK event_perf 00:03:04.928 LINK reconnect 00:03:04.928 CXX test/cpp_headers/assert.o 00:03:04.928 LINK hello_fsdev 00:03:05.187 LINK aer 00:03:05.187 CXX test/cpp_headers/barrier.o 00:03:05.187 CC test/event/reactor/reactor.o 00:03:05.187 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:05.187 LINK mem_callbacks 00:03:05.187 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.187 CC test/nvme/reset/reset.o 00:03:05.187 LINK accel_perf 00:03:05.187 LINK reactor 00:03:05.187 CXX test/cpp_headers/base64.o 00:03:05.187 LINK nvme_fuzz 00:03:05.447 CC test/env/vtophys/vtophys.o 00:03:05.447 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.447 CXX test/cpp_headers/bdev.o 00:03:05.447 CC test/event/reactor_perf/reactor_perf.o 00:03:05.447 LINK reset 00:03:05.447 LINK vtophys 00:03:05.447 CC test/event/app_repeat/app_repeat.o 00:03:05.447 
CXX test/cpp_headers/bdev_module.o 00:03:05.447 CC test/event/scheduler/scheduler.o 00:03:05.707 LINK reactor_perf 00:03:05.707 LINK nvme_manage 00:03:05.707 LINK app_repeat 00:03:05.707 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.707 CXX test/cpp_headers/bdev_zone.o 00:03:05.707 CC test/nvme/sgl/sgl.o 00:03:05.707 LINK vhost_fuzz 00:03:05.707 LINK scheduler 00:03:05.707 LINK env_dpdk_post_init 00:03:05.967 CC test/rpc_client/rpc_client_test.o 00:03:05.967 CXX test/cpp_headers/bit_array.o 00:03:05.967 CC examples/nvme/arbitration/arbitration.o 00:03:05.967 CC examples/blob/hello_world/hello_blob.o 00:03:05.967 LINK sgl 00:03:05.967 CXX test/cpp_headers/bit_pool.o 00:03:05.967 LINK rpc_client_test 00:03:05.967 CC test/env/memory/memory_ut.o 00:03:05.967 CC test/accel/dif/dif.o 00:03:05.967 LINK hello_blob 00:03:06.227 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.227 CXX test/cpp_headers/blob_bdev.o 00:03:06.227 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.227 CC test/nvme/e2edp/nvme_dp.o 00:03:06.227 LINK arbitration 00:03:06.227 CC examples/blob/cli/blobcli.o 00:03:06.227 CXX test/cpp_headers/blobfs.o 00:03:06.227 LINK hello_bdev 00:03:06.486 CC test/env/pci/pci_ut.o 00:03:06.486 CC examples/nvme/hotplug/hotplug.o 00:03:06.486 LINK nvme_dp 00:03:06.486 CXX test/cpp_headers/blob.o 00:03:06.486 LINK dif 00:03:06.486 CXX test/cpp_headers/conf.o 00:03:06.486 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.744 CC test/nvme/overhead/overhead.o 00:03:06.744 LINK hotplug 00:03:06.745 LINK iscsi_fuzz 00:03:06.745 CXX test/cpp_headers/config.o 00:03:06.745 LINK pci_ut 00:03:06.745 CXX test/cpp_headers/cpuset.o 00:03:06.745 LINK blobcli 00:03:06.745 CC test/nvme/err_injection/err_injection.o 00:03:06.745 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.003 LINK overhead 00:03:07.003 CXX test/cpp_headers/crc16.o 00:03:07.003 LINK err_injection 00:03:07.003 CC test/app/histogram_perf/histogram_perf.o 00:03:07.003 LINK cmb_copy 00:03:07.003 CC test/app/jsoncat/jsoncat.o 00:03:07.003 CC test/nvme/startup/startup.o 00:03:07.003 CXX test/cpp_headers/crc32.o 00:03:07.003 LINK histogram_perf 00:03:07.003 LINK memory_ut 00:03:07.003 CC examples/nvme/abort/abort.o 00:03:07.003 LINK jsoncat 00:03:07.262 CXX test/cpp_headers/crc64.o 00:03:07.262 CC test/nvme/reserve/reserve.o 00:03:07.262 CXX test/cpp_headers/dif.o 00:03:07.262 LINK startup 00:03:07.262 CC test/nvme/simple_copy/simple_copy.o 00:03:07.262 CXX test/cpp_headers/dma.o 00:03:07.262 CC test/app/stub/stub.o 00:03:07.262 LINK reserve 00:03:07.262 CC test/nvme/connect_stress/connect_stress.o 00:03:07.262 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.262 LINK bdevperf 00:03:07.520 CXX test/cpp_headers/endian.o 00:03:07.520 LINK simple_copy 00:03:07.520 LINK stub 00:03:07.520 CC test/bdev/bdevio/bdevio.o 00:03:07.520 CC test/nvme/boot_partition/boot_partition.o 00:03:07.520 LINK pmr_persistence 00:03:07.520 LINK abort 00:03:07.520 CXX test/cpp_headers/env_dpdk.o 00:03:07.520 CXX test/cpp_headers/env.o 00:03:07.520 LINK connect_stress 00:03:07.520 CXX test/cpp_headers/event.o 00:03:07.520 CC test/nvme/compliance/nvme_compliance.o 00:03:07.520 LINK boot_partition 00:03:07.778 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.778 CXX test/cpp_headers/fd_group.o 00:03:07.778 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.778 CXX test/cpp_headers/fd.o 00:03:07.778 CC test/nvme/fdp/fdp.o 00:03:07.778 CC test/nvme/cuse/cuse.o 00:03:07.778 CC examples/nvmf/nvmf/nvmf.o 00:03:07.778 LINK bdevio 00:03:07.778 LINK fused_ordering 
00:03:07.778 CXX test/cpp_headers/file.o 00:03:07.778 CXX test/cpp_headers/fsdev.o 00:03:07.778 LINK nvme_compliance 00:03:07.778 LINK doorbell_aers 00:03:08.035 CXX test/cpp_headers/fsdev_module.o 00:03:08.035 CXX test/cpp_headers/ftl.o 00:03:08.035 CXX test/cpp_headers/fuse_dispatcher.o 00:03:08.035 CXX test/cpp_headers/gpt_spec.o 00:03:08.035 CXX test/cpp_headers/hexlify.o 00:03:08.035 CXX test/cpp_headers/histogram_data.o 00:03:08.035 LINK nvmf 00:03:08.035 CXX test/cpp_headers/idxd.o 00:03:08.035 CXX test/cpp_headers/idxd_spec.o 00:03:08.035 LINK fdp 00:03:08.035 CXX test/cpp_headers/init.o 00:03:08.035 CXX test/cpp_headers/ioat.o 00:03:08.035 CXX test/cpp_headers/ioat_spec.o 00:03:08.035 CXX test/cpp_headers/iscsi_spec.o 00:03:08.035 CXX test/cpp_headers/json.o 00:03:08.301 CXX test/cpp_headers/jsonrpc.o 00:03:08.301 CXX test/cpp_headers/keyring.o 00:03:08.301 CXX test/cpp_headers/keyring_module.o 00:03:08.301 CXX test/cpp_headers/likely.o 00:03:08.301 CXX test/cpp_headers/log.o 00:03:08.301 CXX test/cpp_headers/lvol.o 00:03:08.301 CXX test/cpp_headers/md5.o 00:03:08.301 CXX test/cpp_headers/memory.o 00:03:08.301 CXX test/cpp_headers/mmio.o 00:03:08.301 CXX test/cpp_headers/nbd.o 00:03:08.301 CXX test/cpp_headers/net.o 00:03:08.301 CXX test/cpp_headers/notify.o 00:03:08.301 CXX test/cpp_headers/nvme.o 00:03:08.301 CXX test/cpp_headers/nvme_intel.o 00:03:08.301 CXX test/cpp_headers/nvme_ocssd.o 00:03:08.301 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:08.301 CXX test/cpp_headers/nvme_spec.o 00:03:08.301 CXX test/cpp_headers/nvme_zns.o 00:03:08.567 CXX test/cpp_headers/nvmf_cmd.o 00:03:08.567 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.567 CXX test/cpp_headers/nvmf.o 00:03:08.567 CXX test/cpp_headers/nvmf_spec.o 00:03:08.567 CXX test/cpp_headers/nvmf_transport.o 00:03:08.567 CXX test/cpp_headers/opal.o 00:03:08.567 CXX test/cpp_headers/opal_spec.o 00:03:08.567 CXX test/cpp_headers/pci_ids.o 00:03:08.567 CXX test/cpp_headers/pipe.o 00:03:08.567 CXX test/cpp_headers/queue.o 00:03:08.567 CXX test/cpp_headers/reduce.o 00:03:08.567 CXX test/cpp_headers/rpc.o 00:03:08.567 CXX test/cpp_headers/scheduler.o 00:03:08.567 CXX test/cpp_headers/scsi.o 00:03:08.567 CXX test/cpp_headers/scsi_spec.o 00:03:08.567 CXX test/cpp_headers/sock.o 00:03:08.567 CXX test/cpp_headers/stdinc.o 00:03:08.567 CXX test/cpp_headers/string.o 00:03:08.567 CXX test/cpp_headers/thread.o 00:03:08.824 CXX test/cpp_headers/trace.o 00:03:08.824 CXX test/cpp_headers/trace_parser.o 00:03:08.824 CXX test/cpp_headers/tree.o 00:03:08.824 CXX test/cpp_headers/ublk.o 00:03:08.824 CXX test/cpp_headers/util.o 00:03:08.824 CXX test/cpp_headers/uuid.o 00:03:08.824 CXX test/cpp_headers/version.o 00:03:08.824 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.824 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.824 CXX test/cpp_headers/vhost.o 00:03:08.824 CXX test/cpp_headers/vmd.o 00:03:08.824 LINK cuse 00:03:08.824 CXX test/cpp_headers/xor.o 00:03:08.824 CXX test/cpp_headers/zipf.o 00:03:10.738 LINK esnap 00:03:10.738 00:03:10.738 real 1m8.006s 00:03:10.738 user 6m18.255s 00:03:10.738 sys 1m11.765s 00:03:10.738 17:33:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:10.738 17:33:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.738 ************************************ 00:03:10.738 END TEST make 00:03:10.738 ************************************ 00:03:10.738 17:33:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.738 17:33:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.738 17:33:00 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.738 17:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.738 17:33:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.738 17:33:00 -- pm/common@44 -- $ pid=5060 00:03:10.738 17:33:00 -- pm/common@50 -- $ kill -TERM 5060 00:03:10.738 17:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.738 17:33:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.738 17:33:00 -- pm/common@44 -- $ pid=5061 00:03:10.738 17:33:00 -- pm/common@50 -- $ kill -TERM 5061 00:03:10.999 17:33:00 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:10.999 17:33:00 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:10.999 17:33:00 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:10.999 17:33:00 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:10.999 17:33:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.999 17:33:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.999 17:33:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.999 17:33:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.999 17:33:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.999 17:33:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.999 17:33:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.999 17:33:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.000 17:33:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.000 17:33:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.000 17:33:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.000 17:33:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.000 17:33:00 -- scripts/common.sh@345 -- # : 1 00:03:11.000 17:33:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.000 17:33:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.000 17:33:00 -- scripts/common.sh@365 -- # decimal 1 00:03:11.000 17:33:00 -- scripts/common.sh@353 -- # local d=1 00:03:11.000 17:33:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.000 17:33:00 -- scripts/common.sh@355 -- # echo 1 00:03:11.000 17:33:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.000 17:33:00 -- scripts/common.sh@366 -- # decimal 2 00:03:11.000 17:33:00 -- scripts/common.sh@353 -- # local d=2 00:03:11.000 17:33:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.000 17:33:00 -- scripts/common.sh@355 -- # echo 2 00:03:11.000 17:33:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.000 17:33:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.000 17:33:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.000 17:33:00 -- scripts/common.sh@368 -- # return 0 00:03:11.000 17:33:00 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.000 17:33:00 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:11.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.000 --rc genhtml_branch_coverage=1 00:03:11.000 --rc genhtml_function_coverage=1 00:03:11.000 --rc genhtml_legend=1 00:03:11.000 --rc geninfo_all_blocks=1 00:03:11.000 --rc geninfo_unexecuted_blocks=1 00:03:11.000 00:03:11.000 ' 00:03:11.000 17:33:00 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:11.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.000 --rc genhtml_branch_coverage=1 00:03:11.000 --rc genhtml_function_coverage=1 00:03:11.000 --rc genhtml_legend=1 00:03:11.000 --rc geninfo_all_blocks=1 00:03:11.000 --rc geninfo_unexecuted_blocks=1 00:03:11.000 00:03:11.000 ' 00:03:11.000 17:33:00 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:11.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.000 --rc genhtml_branch_coverage=1 00:03:11.000 --rc genhtml_function_coverage=1 00:03:11.000 --rc genhtml_legend=1 00:03:11.000 --rc geninfo_all_blocks=1 00:03:11.000 --rc geninfo_unexecuted_blocks=1 00:03:11.000 00:03:11.000 ' 00:03:11.000 17:33:00 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:11.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.000 --rc genhtml_branch_coverage=1 00:03:11.000 --rc genhtml_function_coverage=1 00:03:11.000 --rc genhtml_legend=1 00:03:11.000 --rc geninfo_all_blocks=1 00:03:11.000 --rc geninfo_unexecuted_blocks=1 00:03:11.000 00:03:11.000 ' 00:03:11.000 17:33:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.000 17:33:00 -- nvmf/common.sh@7 -- # uname -s 00:03:11.000 17:33:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.000 17:33:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.000 17:33:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.000 17:33:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.000 17:33:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.000 17:33:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.000 17:33:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.000 17:33:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.000 17:33:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.000 17:33:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.000 17:33:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:03:11.000 
17:33:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:03:11.000 17:33:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.000 17:33:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.000 17:33:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.000 17:33:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.000 17:33:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.000 17:33:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.000 17:33:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.000 17:33:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.000 17:33:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.000 17:33:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.000 17:33:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.000 17:33:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.000 17:33:00 -- paths/export.sh@5 -- # export PATH 00:03:11.000 17:33:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.000 17:33:00 -- nvmf/common.sh@51 -- # : 0 00:03:11.000 17:33:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.000 17:33:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.000 17:33:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.000 17:33:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.000 17:33:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.000 17:33:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.000 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.000 17:33:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.000 17:33:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.000 17:33:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.000 17:33:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.000 17:33:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.000 17:33:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.000 17:33:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.000 17:33:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.000 17:33:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.000 17:33:00 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.000 17:33:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.000 17:33:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.000 17:33:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.000 17:33:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.000 17:33:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54648 00:03:11.000 17:33:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.000 17:33:00 -- pm/common@17 -- # local monitor 00:03:11.000 17:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 17:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 17:33:00 -- pm/common@25 -- # sleep 1 00:03:11.000 17:33:00 -- pm/common@21 -- # date +%s 00:03:11.000 17:33:00 -- pm/common@21 -- # date +%s 00:03:11.000 17:33:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728840780 00:03:11.000 17:33:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728840780 00:03:11.000 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728840780_collect-cpu-load.pm.log 00:03:11.000 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728840780_collect-vmstat.pm.log 00:03:11.944 17:33:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.944 17:33:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.944 17:33:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:11.944 17:33:01 -- common/autotest_common.sh@10 -- # set +x 00:03:11.944 17:33:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.944 17:33:01 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:11.944 17:33:01 -- common/autotest_common.sh@10 -- # set +x 00:03:12.203 17:33:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.203 17:33:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.203 17:33:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.203 17:33:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.203 17:33:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.203 17:33:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.203 17:33:01 -- common/autotest_common.sh@1455 -- # uname 00:03:12.203 17:33:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:12.203 17:33:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.203 17:33:01 -- common/autotest_common.sh@1475 -- # uname 00:03:12.203 17:33:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:12.203 17:33:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.203 17:33:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.203 lcov: LCOV version 1.15 00:03:12.203 17:33:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.138 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.074 17:33:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.074 17:33:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.074 17:33:30 -- common/autotest_common.sh@10 -- # set +x 00:03:42.074 17:33:30 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.074 17:33:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.074 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:42.074 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:42.074 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:42.074 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:42.074 17:33:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:42.074 17:33:31 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:42.074 17:33:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:42.074 17:33:31 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:42.074 
17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.074 17:33:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:42.074 17:33:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:42.074 17:33:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:42.074 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.074 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:42.074 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:42.074 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.074 No valid GPT data, bailing 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.074 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.074 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.074 1+0 records in 00:03:42.074 1+0 records out 00:03:42.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152391 s, 68.8 MB/s 00:03:42.074 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.074 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:42.074 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:42.074 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:42.074 No valid GPT data, bailing 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.074 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.074 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:42.074 1+0 records in 00:03:42.074 1+0 records out 00:03:42.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.003631 s, 289 MB/s 00:03:42.074 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.074 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:42.074 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:42.074 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:42.074 No valid GPT data, bailing 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.074 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.074 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:42.074 1+0 
records in 00:03:42.074 1+0 records out 00:03:42.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586905 s, 179 MB/s 00:03:42.074 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.074 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:42.074 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:42.074 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:42.074 No valid GPT data, bailing 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.074 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.074 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:42.074 1+0 records in 00:03:42.074 1+0 records out 00:03:42.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539429 s, 194 MB/s 00:03:42.074 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.074 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.074 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:42.074 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:42.074 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:42.074 No valid GPT data, bailing 00:03:42.074 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:42.075 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.075 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.075 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:42.075 1+0 records in 00:03:42.075 1+0 records out 00:03:42.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496906 s, 211 MB/s 00:03:42.075 17:33:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.075 17:33:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.075 17:33:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:42.075 17:33:31 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:42.075 17:33:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:42.075 No valid GPT data, bailing 00:03:42.075 17:33:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:42.335 17:33:31 -- scripts/common.sh@394 -- # pt= 00:03:42.335 17:33:31 -- scripts/common.sh@395 -- # return 1 00:03:42.335 17:33:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:42.335 1+0 records in 00:03:42.335 1+0 records out 00:03:42.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548463 s, 191 MB/s 00:03:42.335 17:33:31 -- spdk/autotest.sh@105 -- # sync 00:03:42.596 17:33:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.596 17:33:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.596 17:33:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.509 17:33:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:44.509 17:33:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:44.509 17:33:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:44.509 17:33:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:44.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.343 
Hugepages 00:03:45.343 node hugesize free / total 00:03:45.343 node0 1048576kB 0 / 0 00:03:45.343 node0 2048kB 0 / 0 00:03:45.343 00:03:45.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.343 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:45.603 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:45.603 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:45.603 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:45.603 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:45.603 17:33:35 -- spdk/autotest.sh@117 -- # uname -s 00:03:45.603 17:33:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:45.603 17:33:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:45.603 17:33:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.749 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.749 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.010 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.010 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.010 17:33:36 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:47.953 17:33:37 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:47.953 17:33:37 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:47.953 17:33:37 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.953 17:33:37 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:47.953 17:33:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:47.953 17:33:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:47.953 17:33:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.953 17:33:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:47.953 17:33:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:47.953 17:33:37 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:47.953 17:33:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:47.953 17:33:37 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.525 Waiting for block devices as requested 00:03:48.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:48.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:48.785 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:48.785 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.073 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:54.073 17:33:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.073 17:33:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1541 -- # continue 00:03:54.073 17:33:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.073 17:33:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1541 -- # continue 00:03:54.073 17:33:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.073 17:33:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1541 -- # continue 00:03:54.073 17:33:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:03:54.073 17:33:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.073 17:33:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.073 17:33:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.073 17:33:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:03:54.073 17:33:43 -- common/autotest_common.sh@1541 -- # continue 00:03:54.073 17:33:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:54.073 17:33:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:54.073 17:33:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.073 17:33:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:54.074 17:33:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.074 17:33:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.074 17:33:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.285 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.285 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.285 17:33:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:55.285 17:33:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.285 17:33:45 -- common/autotest_common.sh@10 -- # set +x 00:03:55.285 17:33:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.285 17:33:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:55.285 17:33:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.285 17:33:45 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:55.285 17:33:45 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:55.285 17:33:45 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:55.285 17:33:45 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.285 17:33:45 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:55.285 17:33:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:55.285 17:33:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:55.285 17:33:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.285 17:33:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:55.285 17:33:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:55.544 17:33:45 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:55.544 17:33:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:55.544 17:33:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.544 17:33:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.544 17:33:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.544 17:33:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.544 17:33:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.544 17:33:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:03:55.544 17:33:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:55.544 17:33:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.544 17:33:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.544 17:33:45 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:55.544 17:33:45 -- common/autotest_common.sh@1570 -- # return 0 00:03:55.544 17:33:45 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:55.544 17:33:45 -- common/autotest_common.sh@1578 -- # return 0 00:03:55.544 17:33:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:55.544 17:33:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:55.544 17:33:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.544 17:33:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.544 17:33:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:55.544 17:33:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.544 17:33:45 -- common/autotest_common.sh@10 -- # set +x 00:03:55.544 17:33:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:55.544 17:33:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.544 17:33:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.544 17:33:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.544 17:33:45 -- common/autotest_common.sh@10 -- # set +x 00:03:55.544 ************************************ 00:03:55.544 START TEST env 00:03:55.544 ************************************ 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.544 * Looking for test storage... 00:03:55.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:55.544 17:33:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.544 17:33:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.544 17:33:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.544 17:33:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.544 17:33:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.544 17:33:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.544 17:33:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.544 17:33:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.544 17:33:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.544 17:33:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.544 17:33:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.544 17:33:45 env -- scripts/common.sh@344 -- # case "$op" in 00:03:55.544 17:33:45 env -- scripts/common.sh@345 -- # : 1 00:03:55.544 17:33:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.544 17:33:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.544 17:33:45 env -- scripts/common.sh@365 -- # decimal 1 00:03:55.544 17:33:45 env -- scripts/common.sh@353 -- # local d=1 00:03:55.544 17:33:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.544 17:33:45 env -- scripts/common.sh@355 -- # echo 1 00:03:55.544 17:33:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.544 17:33:45 env -- scripts/common.sh@366 -- # decimal 2 00:03:55.544 17:33:45 env -- scripts/common.sh@353 -- # local d=2 00:03:55.544 17:33:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.544 17:33:45 env -- scripts/common.sh@355 -- # echo 2 00:03:55.544 17:33:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.544 17:33:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.544 17:33:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.544 17:33:45 env -- scripts/common.sh@368 -- # return 0 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:55.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.544 --rc genhtml_branch_coverage=1 00:03:55.544 --rc genhtml_function_coverage=1 00:03:55.544 --rc genhtml_legend=1 00:03:55.544 --rc geninfo_all_blocks=1 00:03:55.544 --rc geninfo_unexecuted_blocks=1 00:03:55.544 00:03:55.544 ' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:55.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.544 --rc genhtml_branch_coverage=1 00:03:55.544 --rc genhtml_function_coverage=1 00:03:55.544 --rc genhtml_legend=1 00:03:55.544 --rc geninfo_all_blocks=1 00:03:55.544 --rc geninfo_unexecuted_blocks=1 00:03:55.544 00:03:55.544 ' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:55.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.544 --rc genhtml_branch_coverage=1 00:03:55.544 --rc genhtml_function_coverage=1 00:03:55.544 --rc genhtml_legend=1 00:03:55.544 --rc geninfo_all_blocks=1 00:03:55.544 --rc geninfo_unexecuted_blocks=1 00:03:55.544 00:03:55.544 ' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:55.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.544 --rc genhtml_branch_coverage=1 00:03:55.544 --rc genhtml_function_coverage=1 00:03:55.544 --rc genhtml_legend=1 00:03:55.544 --rc geninfo_all_blocks=1 00:03:55.544 --rc geninfo_unexecuted_blocks=1 00:03:55.544 00:03:55.544 ' 00:03:55.544 17:33:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.544 17:33:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.544 17:33:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.544 ************************************ 00:03:55.544 START TEST env_memory 00:03:55.544 ************************************ 00:03:55.544 17:33:45 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.544 00:03:55.544 00:03:55.544 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.544 http://cunit.sourceforge.net/ 00:03:55.544 00:03:55.544 00:03:55.544 Suite: memory 00:03:55.803 Test: alloc and free memory map ...[2024-10-13 17:33:45.362259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:55.803 passed 00:03:55.803 Test: mem map translation ...[2024-10-13 17:33:45.401580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:55.803 [2024-10-13 17:33:45.401693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:55.803 [2024-10-13 17:33:45.401794] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:55.803 [2024-10-13 17:33:45.401848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:55.803 passed 00:03:55.803 Test: mem map registration ...[2024-10-13 17:33:45.469783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:55.803 [2024-10-13 17:33:45.469872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:55.803 passed 00:03:55.803 Test: mem map adjacent registrations ...passed 00:03:55.803 00:03:55.803 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.803 suites 1 1 n/a 0 0 00:03:55.803 tests 4 4 4 0 0 00:03:55.803 asserts 152 152 152 0 n/a 00:03:55.803 00:03:55.803 Elapsed time = 0.233 seconds 00:03:55.803 00:03:55.803 real 0m0.268s 00:03:55.803 user 0m0.242s 00:03:55.803 sys 0m0.018s 00:03:55.803 ************************************ 00:03:55.803 END TEST env_memory 00:03:55.803 ************************************ 00:03:55.803 17:33:45 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.803 17:33:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.061 17:33:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.061 17:33:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.061 17:33:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.061 17:33:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.061 ************************************ 00:03:56.061 START TEST env_vtophys 00:03:56.061 ************************************ 00:03:56.061 17:33:45 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.061 EAL: lib.eal log level changed from notice to debug 00:03:56.061 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 1 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 2 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 3 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 4 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 5 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 6 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 7 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 8 as core 0 on socket 0 00:03:56.061 EAL: Detected lcore 9 as core 0 on socket 0 00:03:56.061 EAL: Maximum logical cores by configuration: 128 00:03:56.061 EAL: Detected CPU lcores: 10 00:03:56.061 EAL: Detected NUMA nodes: 1 00:03:56.061 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.061 EAL: Detected shared linkage of DPDK 00:03:56.061 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:56.061 EAL: Selected IOVA mode 'PA' 00:03:56.061 EAL: Probing VFIO support... 00:03:56.061 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.061 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:56.061 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.061 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.061 EAL: Setting up physically contiguous memory... 00:03:56.061 EAL: Setting maximum number of open files to 524288 00:03:56.061 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.061 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.061 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.061 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.061 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.061 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.061 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.061 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.061 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.061 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.061 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.061 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.061 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.061 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.061 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.061 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.061 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.061 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.061 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.061 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.061 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.061 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.061 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.061 EAL: Hugepages will be freed exactly as allocated. 00:03:56.061 EAL: No shared files mode enabled, IPC is disabled 00:03:56.061 EAL: No shared files mode enabled, IPC is disabled 00:03:56.061 EAL: TSC frequency is ~2600000 KHz 00:03:56.061 EAL: Main lcore 0 is ready (tid=7ff1dd6cea40;cpuset=[0]) 00:03:56.061 EAL: Trying to obtain current memory policy. 00:03:56.061 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.061 EAL: Restoring previous memory policy: 0 00:03:56.061 EAL: request: mp_malloc_sync 00:03:56.061 EAL: No shared files mode enabled, IPC is disabled 00:03:56.061 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.061 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.061 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.061 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.061 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:56.061 00:03:56.061 00:03:56.061 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.061 http://cunit.sourceforge.net/ 00:03:56.061 00:03:56.061 00:03:56.061 Suite: components_suite 00:03:56.319 Test: vtophys_malloc_test ...passed 00:03:56.319 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.319 EAL: Restoring previous memory policy: 4 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.319 EAL: Trying to obtain current memory policy. 00:03:56.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.319 EAL: Restoring previous memory policy: 4 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.319 EAL: Trying to obtain current memory policy. 00:03:56.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.319 EAL: Restoring previous memory policy: 4 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.319 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.319 EAL: request: mp_malloc_sync 00:03:56.319 EAL: No shared files mode enabled, IPC is disabled 00:03:56.319 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.576 EAL: Trying to obtain current memory policy. 00:03:56.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.576 EAL: Restoring previous memory policy: 4 00:03:56.576 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.576 EAL: request: mp_malloc_sync 00:03:56.576 EAL: No shared files mode enabled, IPC is disabled 00:03:56.576 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.577 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.577 EAL: request: mp_malloc_sync 00:03:56.577 EAL: No shared files mode enabled, IPC is disabled 00:03:56.577 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.577 EAL: Trying to obtain current memory policy. 00:03:56.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.577 EAL: Restoring previous memory policy: 4 00:03:56.577 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.577 EAL: request: mp_malloc_sync 00:03:56.577 EAL: No shared files mode enabled, IPC is disabled 00:03:56.577 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.577 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.577 EAL: request: mp_malloc_sync 00:03:56.577 EAL: No shared files mode enabled, IPC is disabled 00:03:56.577 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.577 EAL: Trying to obtain current memory policy. 
00:03:56.577 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.577 EAL: Restoring previous memory policy: 4
00:03:56.577 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.577 EAL: request: mp_malloc_sync
00:03:56.577 EAL: No shared files mode enabled, IPC is disabled
00:03:56.577 EAL: Heap on socket 0 was expanded by 66MB
00:03:56.577 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.577 EAL: request: mp_malloc_sync
00:03:56.577 EAL: No shared files mode enabled, IPC is disabled
00:03:56.577 EAL: Heap on socket 0 was shrunk by 66MB
00:03:56.577 EAL: Trying to obtain current memory policy.
00:03:56.577 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.835 EAL: Restoring previous memory policy: 4
00:03:56.835 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.835 EAL: request: mp_malloc_sync
00:03:56.835 EAL: No shared files mode enabled, IPC is disabled
00:03:56.835 EAL: Heap on socket 0 was expanded by 130MB
00:03:56.835 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.835 EAL: request: mp_malloc_sync
00:03:56.835 EAL: No shared files mode enabled, IPC is disabled
00:03:56.835 EAL: Heap on socket 0 was shrunk by 130MB
00:03:57.093 EAL: Trying to obtain current memory policy.
00:03:57.093 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.093 EAL: Restoring previous memory policy: 4
00:03:57.093 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.093 EAL: request: mp_malloc_sync
00:03:57.093 EAL: No shared files mode enabled, IPC is disabled
00:03:57.093 EAL: Heap on socket 0 was expanded by 258MB
00:03:57.350 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.351 EAL: request: mp_malloc_sync
00:03:57.351 EAL: No shared files mode enabled, IPC is disabled
00:03:57.351 EAL: Heap on socket 0 was shrunk by 258MB
00:03:57.608 EAL: Trying to obtain current memory policy.
00:03:57.608 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.608 EAL: Restoring previous memory policy: 4
00:03:57.608 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.608 EAL: request: mp_malloc_sync
00:03:57.608 EAL: No shared files mode enabled, IPC is disabled
00:03:57.608 EAL: Heap on socket 0 was expanded by 514MB
00:03:58.179 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.437 EAL: request: mp_malloc_sync
00:03:58.437 EAL: No shared files mode enabled, IPC is disabled
00:03:58.437 EAL: Heap on socket 0 was shrunk by 514MB
00:03:59.004 EAL: Trying to obtain current memory policy.
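The expand/shrink pairs in this suite follow a fixed progression: each round of vtophys_spdk_malloc_test grows the heap by 2^k + 2 MB, which matches the 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB steps above and the 1026 MB round just below. A throwaway one-liner to reproduce the sequence:

# Prints the heap-growth sizes observed in this log: 2^k + 2 MB for k = 1..10.
for k in {1..10}; do printf '%dMB ' $((2 ** k + 2)); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB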
00:03:59.004 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.004 EAL: Restoring previous memory policy: 4
00:03:59.004 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.004 EAL: request: mp_malloc_sync
00:03:59.004 EAL: No shared files mode enabled, IPC is disabled
00:03:59.004 EAL: Heap on socket 0 was expanded by 1026MB
00:04:00.385 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.385 EAL: request: mp_malloc_sync
00:04:00.386 EAL: No shared files mode enabled, IPC is disabled
00:04:00.386 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:00.951 passed
00:04:00.951
00:04:00.951 Run Summary: Type Total Ran Passed Failed Inactive
00:04:00.951 suites 1 1 n/a 0 0
00:04:00.951 tests 2 2 2 0 0
00:04:00.951 asserts 5775 5775 5775 0 n/a
00:04:00.951
00:04:00.951 Elapsed time = 4.811 seconds
00:04:00.951 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.951 EAL: request: mp_malloc_sync
00:04:00.951 EAL: No shared files mode enabled, IPC is disabled
00:04:00.951 EAL: Heap on socket 0 was shrunk by 2MB
00:04:00.951 EAL: No shared files mode enabled, IPC is disabled
00:04:00.951 EAL: No shared files mode enabled, IPC is disabled
00:04:00.951 EAL: No shared files mode enabled, IPC is disabled
00:04:00.951
00:04:00.951 real 0m5.074s
00:04:00.951 user 0m4.305s
00:04:00.951 sys 0m0.614s
00:04:00.951 ************************************
00:04:00.951 END TEST env_vtophys
00:04:00.951 ************************************
00:04:00.951 17:33:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.951 17:33:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:00.951 17:33:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:00.951 17:33:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.951 17:33:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.951 17:33:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.951 ************************************
00:04:00.951 START TEST env_pci
00:04:00.951 ************************************
00:04:00.951 17:33:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:01.209
00:04:01.209
00:04:01.209 CUnit - A unit testing framework for C - Version 2.1-3
00:04:01.209 http://cunit.sourceforge.net/
00:04:01.209
00:04:01.209
00:04:01.209 Suite: pci
00:04:01.209 Test: pci_hook ...[2024-10-13 17:33:50.779266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57421 has claimed it
00:04:01.209 passed
00:04:01.209
00:04:01.209 Run Summary: Type Total Ran Passed Failed Inactive
00:04:01.209 suites 1 1 n/a 0 0
00:04:01.209 tests 1 1 1 0 0
00:04:01.209 asserts 25 25 25 0 n/a
00:04:01.209
00:04:01.209 Elapsed time = 0.005 seconds
00:04:01.209 EAL: Cannot find device (10000:00:01.0)
00:04:01.209 EAL: Failed to attach device on primary process
00:04:01.209
00:04:01.209 real 0m0.062s
00:04:01.209 user 0m0.029s
00:04:01.209 sys 0m0.034s
00:04:01.209 17:33:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:01.209 ************************************
00:04:01.209 END TEST env_pci
00:04:01.209 ************************************
00:04:01.209 17:33:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:01.209 17:33:50 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:01.209 17:33:50 env -- env/env.sh@15 -- # uname
00:04:01.209 17:33:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:01.209 17:33:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:01.209 17:33:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.209 17:33:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:01.209 17:33:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.209 17:33:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:01.209 ************************************
00:04:01.209 START TEST env_dpdk_post_init
00:04:01.209 ************************************
00:04:01.209 17:33:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.209 EAL: Detected CPU lcores: 10
00:04:01.209 EAL: Detected NUMA nodes: 1
00:04:01.209 EAL: Detected shared linkage of DPDK
00:04:01.209 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:01.209 EAL: Selected IOVA mode 'PA'
00:04:01.467 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:01.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:01.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:01.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1)
00:04:01.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1)
00:04:01.467 Starting DPDK initialization...
00:04:01.467 Starting SPDK post initialization...
00:04:01.467 SPDK NVMe probe
00:04:01.467 Attaching to 0000:00:10.0
00:04:01.467 Attaching to 0000:00:11.0
00:04:01.467 Attaching to 0000:00:12.0
00:04:01.467 Attaching to 0000:00:13.0
00:04:01.467 Attached to 0000:00:10.0
00:04:01.467 Attached to 0000:00:11.0
00:04:01.467 Attached to 0000:00:13.0
00:04:01.467 Attached to 0000:00:12.0
00:04:01.467 Cleaning up...
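Every test in this log is bracketed by the same star banners and real/user/sys timing lines; the END TEST env_dpdk_post_init block follows just below. They come from autotest's run_test wrapper. A minimal sketch of that wrapper, reduced to the output visible here (the real helper in test/common/autotest_common.sh also manages xtrace state and timing records):

# Hypothetical reduction of run_test to the banner-and-timing behaviour
# seen in this log; not the full autotest_common.sh implementation.
run_test() {
    local test_name=$1
    shift                       # the remaining args are the command under test
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"                   # emits the real/user/sys lines
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return $rc
}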
00:04:01.467 00:04:01.467 real 0m0.239s 00:04:01.467 user 0m0.070s 00:04:01.467 sys 0m0.071s 00:04:01.467 ************************************ 00:04:01.467 END TEST env_dpdk_post_init 00:04:01.467 17:33:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.467 17:33:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.467 ************************************ 00:04:01.467 17:33:51 env -- env/env.sh@26 -- # uname 00:04:01.467 17:33:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:01.467 17:33:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.467 17:33:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.468 17:33:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.468 17:33:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.468 ************************************ 00:04:01.468 START TEST env_mem_callbacks 00:04:01.468 ************************************ 00:04:01.468 17:33:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.468 EAL: Detected CPU lcores: 10 00:04:01.468 EAL: Detected NUMA nodes: 1 00:04:01.468 EAL: Detected shared linkage of DPDK 00:04:01.468 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.468 EAL: Selected IOVA mode 'PA' 00:04:01.727 00:04:01.727 00:04:01.727 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.727 http://cunit.sourceforge.net/ 00:04:01.727 00:04:01.727 00:04:01.727 Suite: memory 00:04:01.727 Test: test ... 00:04:01.727 register 0x200000200000 2097152 00:04:01.727 malloc 3145728 00:04:01.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.727 register 0x200000400000 4194304 00:04:01.727 buf 0x2000004fffc0 len 3145728 PASSED 00:04:01.727 malloc 64 00:04:01.727 buf 0x2000004ffec0 len 64 PASSED 00:04:01.727 malloc 4194304 00:04:01.727 register 0x200000800000 6291456 00:04:01.727 buf 0x2000009fffc0 len 4194304 PASSED 00:04:01.727 free 0x2000004fffc0 3145728 00:04:01.727 free 0x2000004ffec0 64 00:04:01.727 unregister 0x200000400000 4194304 PASSED 00:04:01.727 free 0x2000009fffc0 4194304 00:04:01.727 unregister 0x200000800000 6291456 PASSED 00:04:01.728 malloc 8388608 00:04:01.728 register 0x200000400000 10485760 00:04:01.728 buf 0x2000005fffc0 len 8388608 PASSED 00:04:01.728 free 0x2000005fffc0 8388608 00:04:01.728 unregister 0x200000400000 10485760 PASSED 00:04:01.728 passed 00:04:01.728 00:04:01.728 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.728 suites 1 1 n/a 0 0 00:04:01.728 tests 1 1 1 0 0 00:04:01.728 asserts 15 15 15 0 n/a 00:04:01.728 00:04:01.728 Elapsed time = 0.049 seconds 00:04:01.728 00:04:01.728 real 0m0.215s 00:04:01.728 user 0m0.067s 00:04:01.728 sys 0m0.046s 00:04:01.728 17:33:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.728 ************************************ 00:04:01.728 END TEST env_mem_callbacks 00:04:01.728 ************************************ 00:04:01.728 17:33:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:01.728 00:04:01.728 real 0m6.264s 00:04:01.728 user 0m4.866s 00:04:01.728 sys 0m1.003s 00:04:01.728 17:33:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.728 ************************************ 00:04:01.728 END TEST env 00:04:01.728 ************************************ 00:04:01.728 17:33:51 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.728 17:33:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.728 17:33:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.728 17:33:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.728 17:33:51 -- common/autotest_common.sh@10 -- # set +x 00:04:01.728 ************************************ 00:04:01.728 START TEST rpc 00:04:01.728 ************************************ 00:04:01.728 17:33:51 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.728 * Looking for test storage... 00:04:01.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.728 17:33:51 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:01.728 17:33:51 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:01.728 17:33:51 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.986 17:33:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.986 17:33:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.986 17:33:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.986 17:33:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.986 17:33:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.986 17:33:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.986 17:33:51 rpc -- scripts/common.sh@345 -- # : 1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.986 17:33:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.986 17:33:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.986 17:33:51 rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.986 17:33:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.986 17:33:51 rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.986 17:33:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.986 17:33:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.986 17:33:51 rpc -- scripts/common.sh@368 -- # return 0 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:01.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.986 --rc genhtml_branch_coverage=1 00:04:01.986 --rc genhtml_function_coverage=1 00:04:01.986 --rc genhtml_legend=1 00:04:01.986 --rc geninfo_all_blocks=1 00:04:01.986 --rc geninfo_unexecuted_blocks=1 00:04:01.986 00:04:01.986 ' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:01.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.986 --rc genhtml_branch_coverage=1 00:04:01.986 --rc genhtml_function_coverage=1 00:04:01.986 --rc genhtml_legend=1 00:04:01.986 --rc geninfo_all_blocks=1 00:04:01.986 --rc geninfo_unexecuted_blocks=1 00:04:01.986 00:04:01.986 ' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:01.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.986 --rc genhtml_branch_coverage=1 00:04:01.986 --rc genhtml_function_coverage=1 00:04:01.986 --rc genhtml_legend=1 00:04:01.986 --rc geninfo_all_blocks=1 00:04:01.986 --rc geninfo_unexecuted_blocks=1 00:04:01.986 00:04:01.986 ' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:01.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.986 --rc genhtml_branch_coverage=1 00:04:01.986 --rc genhtml_function_coverage=1 00:04:01.986 --rc genhtml_legend=1 00:04:01.986 --rc geninfo_all_blocks=1 00:04:01.986 --rc geninfo_unexecuted_blocks=1 00:04:01.986 00:04:01.986 ' 00:04:01.986 17:33:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57548 00:04:01.986 17:33:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.986 17:33:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57548 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 57548 ']' 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:01.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
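The lt 1.15 2 check traced above comes from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field to decide which lcov flag spelling to export. A condensed sketch of that comparison, assuming purely numeric fields as in the lcov version here (the real cmp_versions also handles the other comparison operators):

# Condensed version of the cmp_versions walk traced above: returns
# success when $1 sorts strictly before $2.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x: use the --rc lcov_* spelling'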
00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:01.986 17:33:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.986 17:33:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:01.986 [2024-10-13 17:33:51.674819] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:01.986 [2024-10-13 17:33:51.674949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57548 ] 00:04:02.244 [2024-10-13 17:33:51.827209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.244 [2024-10-13 17:33:51.919510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.244 [2024-10-13 17:33:51.919569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57548' to capture a snapshot of events at runtime. 00:04:02.244 [2024-10-13 17:33:51.919579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.244 [2024-10-13 17:33:51.919588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.244 [2024-10-13 17:33:51.919596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57548 for offline analysis/debug. 00:04:02.244 [2024-10-13 17:33:51.920446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.810 17:33:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.810 17:33:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:02.810 17:33:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.811 17:33:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.811 17:33:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.811 17:33:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.811 17:33:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.811 17:33:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.811 17:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 ************************************ 00:04:02.811 START TEST rpc_integrity 00:04:02.811 ************************************ 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
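Stripped of the xtrace noise, the rpc_integrity flow that starts with the bdev_malloc_create call above reduces to a handful of JSON-RPC calls against the freshly started spdk_tgt. A condensed sketch, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket:

# Condensed version of the rpc_integrity steps traced above and below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
malloc=$($rpc bdev_malloc_create 8 512)          # 8 MiB, 512-byte blocks; prints Malloc0
$rpc bdev_passthru_create -b "$malloc" -p Passthru0
(( $($rpc bdev_get_bdevs | jq length) == 2 ))    # Malloc0 + Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
(( $($rpc bdev_get_bdevs | jq length) == 0 ))    # back to an empty bdev list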
00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.811 { 00:04:02.811 "name": "Malloc0", 00:04:02.811 "aliases": [ 00:04:02.811 "51c8bc26-9a95-4eeb-a918-78703c994185" 00:04:02.811 ], 00:04:02.811 "product_name": "Malloc disk", 00:04:02.811 "block_size": 512, 00:04:02.811 "num_blocks": 16384, 00:04:02.811 "uuid": "51c8bc26-9a95-4eeb-a918-78703c994185", 00:04:02.811 "assigned_rate_limits": { 00:04:02.811 "rw_ios_per_sec": 0, 00:04:02.811 "rw_mbytes_per_sec": 0, 00:04:02.811 "r_mbytes_per_sec": 0, 00:04:02.811 "w_mbytes_per_sec": 0 00:04:02.811 }, 00:04:02.811 "claimed": false, 00:04:02.811 "zoned": false, 00:04:02.811 "supported_io_types": { 00:04:02.811 "read": true, 00:04:02.811 "write": true, 00:04:02.811 "unmap": true, 00:04:02.811 "flush": true, 00:04:02.811 "reset": true, 00:04:02.811 "nvme_admin": false, 00:04:02.811 "nvme_io": false, 00:04:02.811 "nvme_io_md": false, 00:04:02.811 "write_zeroes": true, 00:04:02.811 "zcopy": true, 00:04:02.811 "get_zone_info": false, 00:04:02.811 "zone_management": false, 00:04:02.811 "zone_append": false, 00:04:02.811 "compare": false, 00:04:02.811 "compare_and_write": false, 00:04:02.811 "abort": true, 00:04:02.811 "seek_hole": false, 00:04:02.811 "seek_data": false, 00:04:02.811 "copy": true, 00:04:02.811 "nvme_iov_md": false 00:04:02.811 }, 00:04:02.811 "memory_domains": [ 00:04:02.811 { 00:04:02.811 "dma_device_id": "system", 00:04:02.811 "dma_device_type": 1 00:04:02.811 }, 00:04:02.811 { 00:04:02.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.811 "dma_device_type": 2 00:04:02.811 } 00:04:02.811 ], 00:04:02.811 "driver_specific": {} 00:04:02.811 } 00:04:02.811 ]' 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 [2024-10-13 17:33:52.613365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:02.811 [2024-10-13 17:33:52.613420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.811 [2024-10-13 17:33:52.613449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:02.811 [2024-10-13 17:33:52.613461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.811 [2024-10-13 17:33:52.615592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.811 [2024-10-13 17:33:52.615628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.811 
Passthru0 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.811 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.811 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.070 { 00:04:03.070 "name": "Malloc0", 00:04:03.070 "aliases": [ 00:04:03.070 "51c8bc26-9a95-4eeb-a918-78703c994185" 00:04:03.070 ], 00:04:03.070 "product_name": "Malloc disk", 00:04:03.070 "block_size": 512, 00:04:03.070 "num_blocks": 16384, 00:04:03.070 "uuid": "51c8bc26-9a95-4eeb-a918-78703c994185", 00:04:03.070 "assigned_rate_limits": { 00:04:03.070 "rw_ios_per_sec": 0, 00:04:03.070 "rw_mbytes_per_sec": 0, 00:04:03.070 "r_mbytes_per_sec": 0, 00:04:03.070 "w_mbytes_per_sec": 0 00:04:03.070 }, 00:04:03.070 "claimed": true, 00:04:03.070 "claim_type": "exclusive_write", 00:04:03.070 "zoned": false, 00:04:03.070 "supported_io_types": { 00:04:03.070 "read": true, 00:04:03.070 "write": true, 00:04:03.070 "unmap": true, 00:04:03.070 "flush": true, 00:04:03.070 "reset": true, 00:04:03.070 "nvme_admin": false, 00:04:03.070 "nvme_io": false, 00:04:03.070 "nvme_io_md": false, 00:04:03.070 "write_zeroes": true, 00:04:03.070 "zcopy": true, 00:04:03.070 "get_zone_info": false, 00:04:03.070 "zone_management": false, 00:04:03.070 "zone_append": false, 00:04:03.070 "compare": false, 00:04:03.070 "compare_and_write": false, 00:04:03.070 "abort": true, 00:04:03.070 "seek_hole": false, 00:04:03.070 "seek_data": false, 00:04:03.070 "copy": true, 00:04:03.070 "nvme_iov_md": false 00:04:03.070 }, 00:04:03.070 "memory_domains": [ 00:04:03.070 { 00:04:03.070 "dma_device_id": "system", 00:04:03.070 "dma_device_type": 1 00:04:03.070 }, 00:04:03.070 { 00:04:03.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.070 "dma_device_type": 2 00:04:03.070 } 00:04:03.070 ], 00:04:03.070 "driver_specific": {} 00:04:03.070 }, 00:04:03.070 { 00:04:03.070 "name": "Passthru0", 00:04:03.070 "aliases": [ 00:04:03.070 "a7a5cbc1-b8e4-5658-9e60-12bbe9348fb3" 00:04:03.070 ], 00:04:03.070 "product_name": "passthru", 00:04:03.070 "block_size": 512, 00:04:03.070 "num_blocks": 16384, 00:04:03.070 "uuid": "a7a5cbc1-b8e4-5658-9e60-12bbe9348fb3", 00:04:03.070 "assigned_rate_limits": { 00:04:03.070 "rw_ios_per_sec": 0, 00:04:03.070 "rw_mbytes_per_sec": 0, 00:04:03.070 "r_mbytes_per_sec": 0, 00:04:03.070 "w_mbytes_per_sec": 0 00:04:03.070 }, 00:04:03.070 "claimed": false, 00:04:03.070 "zoned": false, 00:04:03.070 "supported_io_types": { 00:04:03.070 "read": true, 00:04:03.070 "write": true, 00:04:03.070 "unmap": true, 00:04:03.070 "flush": true, 00:04:03.070 "reset": true, 00:04:03.070 "nvme_admin": false, 00:04:03.070 "nvme_io": false, 00:04:03.070 "nvme_io_md": false, 00:04:03.070 "write_zeroes": true, 00:04:03.070 "zcopy": true, 00:04:03.070 "get_zone_info": false, 00:04:03.070 "zone_management": false, 00:04:03.070 "zone_append": false, 00:04:03.070 "compare": false, 00:04:03.070 "compare_and_write": false, 00:04:03.070 "abort": true, 00:04:03.070 "seek_hole": false, 00:04:03.070 "seek_data": false, 00:04:03.070 "copy": true, 00:04:03.070 "nvme_iov_md": false 00:04:03.070 }, 00:04:03.070 "memory_domains": [ 00:04:03.070 { 00:04:03.070 "dma_device_id": "system", 00:04:03.070 "dma_device_type": 1 00:04:03.070 }, 
00:04:03.070 { 00:04:03.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.070 "dma_device_type": 2 00:04:03.070 } 00:04:03.070 ], 00:04:03.070 "driver_specific": { 00:04:03.070 "passthru": { 00:04:03.070 "name": "Passthru0", 00:04:03.070 "base_bdev_name": "Malloc0" 00:04:03.070 } 00:04:03.070 } 00:04:03.070 } 00:04:03.070 ]' 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.070 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.070 17:33:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.070 00:04:03.070 real 0m0.251s 00:04:03.070 user 0m0.131s 00:04:03.071 sys 0m0.034s 00:04:03.071 ************************************ 00:04:03.071 END TEST rpc_integrity 00:04:03.071 ************************************ 00:04:03.071 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.071 17:33:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 17:33:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.071 17:33:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.071 17:33:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.071 17:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 ************************************ 00:04:03.071 START TEST rpc_plugins 00:04:03.071 ************************************ 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.071 17:33:52 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.071 { 00:04:03.071 "name": "Malloc1", 00:04:03.071 "aliases": [ 00:04:03.071 "40d01606-4fda-48be-a87f-496e31bb2e31" 00:04:03.071 ], 00:04:03.071 "product_name": "Malloc disk", 00:04:03.071 "block_size": 4096, 00:04:03.071 "num_blocks": 256, 00:04:03.071 "uuid": "40d01606-4fda-48be-a87f-496e31bb2e31", 00:04:03.071 "assigned_rate_limits": { 00:04:03.071 "rw_ios_per_sec": 0, 00:04:03.071 "rw_mbytes_per_sec": 0, 00:04:03.071 "r_mbytes_per_sec": 0, 00:04:03.071 "w_mbytes_per_sec": 0 00:04:03.071 }, 00:04:03.071 "claimed": false, 00:04:03.071 "zoned": false, 00:04:03.071 "supported_io_types": { 00:04:03.071 "read": true, 00:04:03.071 "write": true, 00:04:03.071 "unmap": true, 00:04:03.071 "flush": true, 00:04:03.071 "reset": true, 00:04:03.071 "nvme_admin": false, 00:04:03.071 "nvme_io": false, 00:04:03.071 "nvme_io_md": false, 00:04:03.071 "write_zeroes": true, 00:04:03.071 "zcopy": true, 00:04:03.071 "get_zone_info": false, 00:04:03.071 "zone_management": false, 00:04:03.071 "zone_append": false, 00:04:03.071 "compare": false, 00:04:03.071 "compare_and_write": false, 00:04:03.071 "abort": true, 00:04:03.071 "seek_hole": false, 00:04:03.071 "seek_data": false, 00:04:03.071 "copy": true, 00:04:03.071 "nvme_iov_md": false 00:04:03.071 }, 00:04:03.071 "memory_domains": [ 00:04:03.071 { 00:04:03.071 "dma_device_id": "system", 00:04:03.071 "dma_device_type": 1 00:04:03.071 }, 00:04:03.071 { 00:04:03.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.071 "dma_device_type": 2 00:04:03.071 } 00:04:03.071 ], 00:04:03.071 "driver_specific": {} 00:04:03.071 } 00:04:03.071 ]' 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.071 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.071 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.329 17:33:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.329 00:04:03.329 real 0m0.114s 00:04:03.329 user 0m0.061s 00:04:03.329 sys 0m0.018s 00:04:03.329 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.329 17:33:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.329 ************************************ 00:04:03.329 END TEST rpc_plugins 00:04:03.329 ************************************ 00:04:03.329 17:33:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.329 17:33:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.329 17:33:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.329 17:33:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.329 ************************************ 00:04:03.329 START TEST rpc_trace_cmd_test 
00:04:03.329 ************************************ 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57548", 00:04:03.329 "tpoint_group_mask": "0x8", 00:04:03.329 "iscsi_conn": { 00:04:03.329 "mask": "0x2", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "scsi": { 00:04:03.329 "mask": "0x4", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "bdev": { 00:04:03.329 "mask": "0x8", 00:04:03.329 "tpoint_mask": "0xffffffffffffffff" 00:04:03.329 }, 00:04:03.329 "nvmf_rdma": { 00:04:03.329 "mask": "0x10", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "nvmf_tcp": { 00:04:03.329 "mask": "0x20", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "ftl": { 00:04:03.329 "mask": "0x40", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "blobfs": { 00:04:03.329 "mask": "0x80", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "dsa": { 00:04:03.329 "mask": "0x200", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "thread": { 00:04:03.329 "mask": "0x400", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "nvme_pcie": { 00:04:03.329 "mask": "0x800", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "iaa": { 00:04:03.329 "mask": "0x1000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "nvme_tcp": { 00:04:03.329 "mask": "0x2000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "bdev_nvme": { 00:04:03.329 "mask": "0x4000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "sock": { 00:04:03.329 "mask": "0x8000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "blob": { 00:04:03.329 "mask": "0x10000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "bdev_raid": { 00:04:03.329 "mask": "0x20000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 }, 00:04:03.329 "scheduler": { 00:04:03.329 "mask": "0x40000", 00:04:03.329 "tpoint_mask": "0x0" 00:04:03.329 } 00:04:03.329 }' 00:04:03.329 17:33:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.329 17:33:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.329 00:04:03.329 real 0m0.163s 00:04:03.329 
user 0m0.132s 00:04:03.329 sys 0m0.020s 00:04:03.330 17:33:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.330 17:33:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.330 ************************************ 00:04:03.330 END TEST rpc_trace_cmd_test 00:04:03.330 ************************************ 00:04:03.588 17:33:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.588 17:33:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.588 17:33:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.588 17:33:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.588 17:33:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.588 17:33:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.588 ************************************ 00:04:03.588 START TEST rpc_daemon_integrity 00:04:03.588 ************************************ 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.588 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.588 { 00:04:03.588 "name": "Malloc2", 00:04:03.588 "aliases": [ 00:04:03.588 "afcea954-83e8-4076-901f-94e98d9644d8" 00:04:03.588 ], 00:04:03.588 "product_name": "Malloc disk", 00:04:03.588 "block_size": 512, 00:04:03.588 "num_blocks": 16384, 00:04:03.588 "uuid": "afcea954-83e8-4076-901f-94e98d9644d8", 00:04:03.588 "assigned_rate_limits": { 00:04:03.588 "rw_ios_per_sec": 0, 00:04:03.588 "rw_mbytes_per_sec": 0, 00:04:03.588 "r_mbytes_per_sec": 0, 00:04:03.588 "w_mbytes_per_sec": 0 00:04:03.588 }, 00:04:03.589 "claimed": false, 00:04:03.589 "zoned": false, 00:04:03.589 "supported_io_types": { 00:04:03.589 "read": true, 00:04:03.589 "write": true, 00:04:03.589 "unmap": true, 00:04:03.589 "flush": true, 00:04:03.589 "reset": true, 00:04:03.589 "nvme_admin": false, 00:04:03.589 "nvme_io": false, 00:04:03.589 "nvme_io_md": false, 00:04:03.589 "write_zeroes": true, 00:04:03.589 "zcopy": true, 00:04:03.589 "get_zone_info": 
false, 00:04:03.589 "zone_management": false, 00:04:03.589 "zone_append": false, 00:04:03.589 "compare": false, 00:04:03.589 "compare_and_write": false, 00:04:03.589 "abort": true, 00:04:03.589 "seek_hole": false, 00:04:03.589 "seek_data": false, 00:04:03.589 "copy": true, 00:04:03.589 "nvme_iov_md": false 00:04:03.589 }, 00:04:03.589 "memory_domains": [ 00:04:03.589 { 00:04:03.589 "dma_device_id": "system", 00:04:03.589 "dma_device_type": 1 00:04:03.589 }, 00:04:03.589 { 00:04:03.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.589 "dma_device_type": 2 00:04:03.589 } 00:04:03.589 ], 00:04:03.589 "driver_specific": {} 00:04:03.589 } 00:04:03.589 ]' 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 [2024-10-13 17:33:53.271735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.589 [2024-10-13 17:33:53.271788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.589 [2024-10-13 17:33:53.271806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:03.589 [2024-10-13 17:33:53.271816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.589 [2024-10-13 17:33:53.273935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.589 [2024-10-13 17:33:53.273970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.589 Passthru0 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.589 { 00:04:03.589 "name": "Malloc2", 00:04:03.589 "aliases": [ 00:04:03.589 "afcea954-83e8-4076-901f-94e98d9644d8" 00:04:03.589 ], 00:04:03.589 "product_name": "Malloc disk", 00:04:03.589 "block_size": 512, 00:04:03.589 "num_blocks": 16384, 00:04:03.589 "uuid": "afcea954-83e8-4076-901f-94e98d9644d8", 00:04:03.589 "assigned_rate_limits": { 00:04:03.589 "rw_ios_per_sec": 0, 00:04:03.589 "rw_mbytes_per_sec": 0, 00:04:03.589 "r_mbytes_per_sec": 0, 00:04:03.589 "w_mbytes_per_sec": 0 00:04:03.589 }, 00:04:03.589 "claimed": true, 00:04:03.589 "claim_type": "exclusive_write", 00:04:03.589 "zoned": false, 00:04:03.589 "supported_io_types": { 00:04:03.589 "read": true, 00:04:03.589 "write": true, 00:04:03.589 "unmap": true, 00:04:03.589 "flush": true, 00:04:03.589 "reset": true, 00:04:03.589 "nvme_admin": false, 00:04:03.589 "nvme_io": false, 00:04:03.589 "nvme_io_md": false, 00:04:03.589 "write_zeroes": true, 00:04:03.589 "zcopy": true, 00:04:03.589 "get_zone_info": false, 00:04:03.589 "zone_management": false, 00:04:03.589 "zone_append": false, 00:04:03.589 "compare": false, 
00:04:03.589 "compare_and_write": false, 00:04:03.589 "abort": true, 00:04:03.589 "seek_hole": false, 00:04:03.589 "seek_data": false, 00:04:03.589 "copy": true, 00:04:03.589 "nvme_iov_md": false 00:04:03.589 }, 00:04:03.589 "memory_domains": [ 00:04:03.589 { 00:04:03.589 "dma_device_id": "system", 00:04:03.589 "dma_device_type": 1 00:04:03.589 }, 00:04:03.589 { 00:04:03.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.589 "dma_device_type": 2 00:04:03.589 } 00:04:03.589 ], 00:04:03.589 "driver_specific": {} 00:04:03.589 }, 00:04:03.589 { 00:04:03.589 "name": "Passthru0", 00:04:03.589 "aliases": [ 00:04:03.589 "fad80398-9988-5a3a-801e-ccf0160c0683" 00:04:03.589 ], 00:04:03.589 "product_name": "passthru", 00:04:03.589 "block_size": 512, 00:04:03.589 "num_blocks": 16384, 00:04:03.589 "uuid": "fad80398-9988-5a3a-801e-ccf0160c0683", 00:04:03.589 "assigned_rate_limits": { 00:04:03.589 "rw_ios_per_sec": 0, 00:04:03.589 "rw_mbytes_per_sec": 0, 00:04:03.589 "r_mbytes_per_sec": 0, 00:04:03.589 "w_mbytes_per_sec": 0 00:04:03.589 }, 00:04:03.589 "claimed": false, 00:04:03.589 "zoned": false, 00:04:03.589 "supported_io_types": { 00:04:03.589 "read": true, 00:04:03.589 "write": true, 00:04:03.589 "unmap": true, 00:04:03.589 "flush": true, 00:04:03.589 "reset": true, 00:04:03.589 "nvme_admin": false, 00:04:03.589 "nvme_io": false, 00:04:03.589 "nvme_io_md": false, 00:04:03.589 "write_zeroes": true, 00:04:03.589 "zcopy": true, 00:04:03.589 "get_zone_info": false, 00:04:03.589 "zone_management": false, 00:04:03.589 "zone_append": false, 00:04:03.589 "compare": false, 00:04:03.589 "compare_and_write": false, 00:04:03.589 "abort": true, 00:04:03.589 "seek_hole": false, 00:04:03.589 "seek_data": false, 00:04:03.589 "copy": true, 00:04:03.589 "nvme_iov_md": false 00:04:03.589 }, 00:04:03.589 "memory_domains": [ 00:04:03.589 { 00:04:03.589 "dma_device_id": "system", 00:04:03.589 "dma_device_type": 1 00:04:03.589 }, 00:04:03.589 { 00:04:03.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.589 "dma_device_type": 2 00:04:03.589 } 00:04:03.589 ], 00:04:03.589 "driver_specific": { 00:04:03.589 "passthru": { 00:04:03.589 "name": "Passthru0", 00:04:03.589 "base_bdev_name": "Malloc2" 00:04:03.589 } 00:04:03.589 } 00:04:03.589 } 00:04:03.589 ]' 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.589 00:04:03.589 real 0m0.226s 00:04:03.589 user 0m0.127s 00:04:03.589 sys 0m0.023s 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.589 17:33:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.589 ************************************ 00:04:03.589 END TEST rpc_daemon_integrity 00:04:03.589 ************************************ 00:04:03.847 17:33:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.847 17:33:53 rpc -- rpc/rpc.sh@84 -- # killprocess 57548 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 57548 ']' 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@954 -- # kill -0 57548 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@955 -- # uname 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57548 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:03.847 killing process with pid 57548 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57548' 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@969 -- # kill 57548 00:04:03.847 17:33:53 rpc -- common/autotest_common.sh@974 -- # wait 57548 00:04:05.220 00:04:05.220 real 0m3.192s 00:04:05.220 user 0m3.611s 00:04:05.220 sys 0m0.575s 00:04:05.220 17:33:54 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.220 17:33:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.220 ************************************ 00:04:05.220 END TEST rpc 00:04:05.220 ************************************ 00:04:05.220 17:33:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.220 17:33:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.220 17:33:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.220 17:33:54 -- common/autotest_common.sh@10 -- # set +x 00:04:05.220 ************************************ 00:04:05.220 START TEST skip_rpc 00:04:05.220 ************************************ 00:04:05.220 17:33:54 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.220 * Looking for test storage... 
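The rpc_daemon_integrity pass above exercises the full malloc/passthru bdev lifecycle over JSON-RPC: create a malloc bdev, claim it behind a passthru vbdev, count both via bdev_get_bdevs, then tear both down. A minimal hand-driven sketch of the same sequence, assuming a running spdk_tgt and the stock scripts/rpc.py client instead of the test's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
malloc=$($rpc bdev_malloc_create 8 512)               # 8 MiB malloc bdev, 512 B blocks -> the 16384 blocks seen above
$rpc bdev_passthru_create -b "$malloc" -p Passthru0   # passthru vbdev claims the base bdev (claim_type exclusive_write)
$rpc bdev_get_bdevs | jq length                       # 2: the malloc bdev plus Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
$rpc bdev_get_bdevs | jq length                       # back to 0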
00:04:05.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.220 17:33:54 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.220 17:33:54 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.220 17:33:54 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.220 17:33:54 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.220 17:33:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.221 17:33:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.221 --rc genhtml_branch_coverage=1 00:04:05.221 --rc genhtml_function_coverage=1 00:04:05.221 --rc genhtml_legend=1 00:04:05.221 --rc geninfo_all_blocks=1 00:04:05.221 --rc geninfo_unexecuted_blocks=1 00:04:05.221 00:04:05.221 ' 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.221 --rc genhtml_branch_coverage=1 00:04:05.221 --rc genhtml_function_coverage=1 00:04:05.221 --rc genhtml_legend=1 00:04:05.221 --rc geninfo_all_blocks=1 00:04:05.221 --rc geninfo_unexecuted_blocks=1 00:04:05.221 00:04:05.221 ' 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:05.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.221 --rc genhtml_branch_coverage=1 00:04:05.221 --rc genhtml_function_coverage=1 00:04:05.221 --rc genhtml_legend=1 00:04:05.221 --rc geninfo_all_blocks=1 00:04:05.221 --rc geninfo_unexecuted_blocks=1 00:04:05.221 00:04:05.221 ' 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.221 --rc genhtml_branch_coverage=1 00:04:05.221 --rc genhtml_function_coverage=1 00:04:05.221 --rc genhtml_legend=1 00:04:05.221 --rc geninfo_all_blocks=1 00:04:05.221 --rc geninfo_unexecuted_blocks=1 00:04:05.221 00:04:05.221 ' 00:04:05.221 17:33:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.221 17:33:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:05.221 17:33:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.221 17:33:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.221 ************************************ 00:04:05.221 START TEST skip_rpc 00:04:05.221 ************************************ 00:04:05.221 17:33:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:05.221 17:33:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57755 00:04:05.221 17:33:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.221 17:33:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:05.221 17:33:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:05.221 [2024-10-13 17:33:54.929748] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
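With --no-rpc-server, spdk_tgt never opens /var/tmp/spdk.sock, so the spdk_get_version call that follows must fail; the test's NOT wrapper turns that expected failure into a pass. Stripped of the wrapper, the check amounts to this sketch (default socket path assumed):

if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
    echo "FAIL: RPC answered although spdk_tgt runs with --no-rpc-server" >&2
    exit 1
fi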
00:04:05.221 [2024-10-13 17:33:54.929876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57755 ] 00:04:05.480 [2024-10-13 17:33:55.077974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.480 [2024-10-13 17:33:55.160663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57755 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57755 ']' 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57755 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57755 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.751 killing process with pid 57755 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57755' 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57755 00:04:10.751 17:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57755 00:04:11.318 00:04:11.318 real 0m6.192s 00:04:11.318 user 0m5.823s 00:04:11.318 sys 0m0.273s 00:04:11.318 17:34:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.318 ************************************ 00:04:11.318 END TEST skip_rpc 00:04:11.318 ************************************ 00:04:11.318 17:34:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:11.318 17:34:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.318 17:34:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.318 17:34:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.318 17:34:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.318 ************************************ 00:04:11.318 START TEST skip_rpc_with_json 00:04:11.318 ************************************ 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57848 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57848 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57848 ']' 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.318 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.576 [2024-10-13 17:34:01.182324] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
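skip_rpc_with_json boots a target with the RPC server enabled, creates a TCP transport, snapshots the live configuration with save_config (the JSON dump below), and later restarts spdk_tgt directly from that file via --json. In outline — a sketch using the test's own paths, not the verbatim script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
$rpc nvmf_create_transport -t tcp      # runtime state worth persisting
$rpc save_config > "$cfg"              # snapshot every subsystem's config as JSON
# kill the target, then boot a fresh one straight from the snapshot:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg"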
00:04:11.576 [2024-10-13 17:34:01.182441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57848 ] 00:04:11.576 [2024-10-13 17:34:01.328762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.834 [2024-10-13 17:34:01.424736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.402 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.402 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:12.402 17:34:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.402 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.402 17:34:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.402 [2024-10-13 17:34:02.000286] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:12.402 request: 00:04:12.402 { 00:04:12.402 "trtype": "tcp", 00:04:12.402 "method": "nvmf_get_transports", 00:04:12.402 "req_id": 1 00:04:12.402 } 00:04:12.402 Got JSON-RPC error response 00:04:12.402 response: 00:04:12.402 { 00:04:12.402 "code": -19, 00:04:12.402 "message": "No such device" 00:04:12.402 } 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.402 [2024-10-13 17:34:02.008381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.402 17:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.402 { 00:04:12.402 "subsystems": [ 00:04:12.402 { 00:04:12.402 "subsystem": "fsdev", 00:04:12.402 "config": [ 00:04:12.402 { 00:04:12.402 "method": "fsdev_set_opts", 00:04:12.402 "params": { 00:04:12.402 "fsdev_io_pool_size": 65535, 00:04:12.402 "fsdev_io_cache_size": 256 00:04:12.402 } 00:04:12.402 } 00:04:12.402 ] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "keyring", 00:04:12.402 "config": [] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "iobuf", 00:04:12.402 "config": [ 00:04:12.402 { 00:04:12.402 "method": "iobuf_set_options", 00:04:12.402 "params": { 00:04:12.402 "small_pool_count": 8192, 00:04:12.402 "large_pool_count": 1024, 00:04:12.402 "small_bufsize": 8192, 00:04:12.402 "large_bufsize": 135168 00:04:12.402 } 00:04:12.402 } 00:04:12.402 ] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "sock", 00:04:12.402 "config": [ 00:04:12.402 { 00:04:12.402 "method": 
"sock_set_default_impl", 00:04:12.402 "params": { 00:04:12.402 "impl_name": "posix" 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "sock_impl_set_options", 00:04:12.402 "params": { 00:04:12.402 "impl_name": "ssl", 00:04:12.402 "recv_buf_size": 4096, 00:04:12.402 "send_buf_size": 4096, 00:04:12.402 "enable_recv_pipe": true, 00:04:12.402 "enable_quickack": false, 00:04:12.402 "enable_placement_id": 0, 00:04:12.402 "enable_zerocopy_send_server": true, 00:04:12.402 "enable_zerocopy_send_client": false, 00:04:12.402 "zerocopy_threshold": 0, 00:04:12.402 "tls_version": 0, 00:04:12.402 "enable_ktls": false 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "sock_impl_set_options", 00:04:12.402 "params": { 00:04:12.402 "impl_name": "posix", 00:04:12.402 "recv_buf_size": 2097152, 00:04:12.402 "send_buf_size": 2097152, 00:04:12.402 "enable_recv_pipe": true, 00:04:12.402 "enable_quickack": false, 00:04:12.402 "enable_placement_id": 0, 00:04:12.402 "enable_zerocopy_send_server": true, 00:04:12.402 "enable_zerocopy_send_client": false, 00:04:12.402 "zerocopy_threshold": 0, 00:04:12.402 "tls_version": 0, 00:04:12.402 "enable_ktls": false 00:04:12.402 } 00:04:12.402 } 00:04:12.402 ] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "vmd", 00:04:12.402 "config": [] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "accel", 00:04:12.402 "config": [ 00:04:12.402 { 00:04:12.402 "method": "accel_set_options", 00:04:12.402 "params": { 00:04:12.402 "small_cache_size": 128, 00:04:12.402 "large_cache_size": 16, 00:04:12.402 "task_count": 2048, 00:04:12.402 "sequence_count": 2048, 00:04:12.402 "buf_count": 2048 00:04:12.402 } 00:04:12.402 } 00:04:12.402 ] 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "subsystem": "bdev", 00:04:12.402 "config": [ 00:04:12.402 { 00:04:12.402 "method": "bdev_set_options", 00:04:12.402 "params": { 00:04:12.402 "bdev_io_pool_size": 65535, 00:04:12.402 "bdev_io_cache_size": 256, 00:04:12.402 "bdev_auto_examine": true, 00:04:12.402 "iobuf_small_cache_size": 128, 00:04:12.402 "iobuf_large_cache_size": 16 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "bdev_raid_set_options", 00:04:12.402 "params": { 00:04:12.402 "process_window_size_kb": 1024, 00:04:12.402 "process_max_bandwidth_mb_sec": 0 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "bdev_iscsi_set_options", 00:04:12.402 "params": { 00:04:12.402 "timeout_sec": 30 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "bdev_nvme_set_options", 00:04:12.402 "params": { 00:04:12.402 "action_on_timeout": "none", 00:04:12.402 "timeout_us": 0, 00:04:12.402 "timeout_admin_us": 0, 00:04:12.402 "keep_alive_timeout_ms": 10000, 00:04:12.402 "arbitration_burst": 0, 00:04:12.402 "low_priority_weight": 0, 00:04:12.402 "medium_priority_weight": 0, 00:04:12.402 "high_priority_weight": 0, 00:04:12.402 "nvme_adminq_poll_period_us": 10000, 00:04:12.402 "nvme_ioq_poll_period_us": 0, 00:04:12.402 "io_queue_requests": 0, 00:04:12.402 "delay_cmd_submit": true, 00:04:12.402 "transport_retry_count": 4, 00:04:12.402 "bdev_retry_count": 3, 00:04:12.402 "transport_ack_timeout": 0, 00:04:12.402 "ctrlr_loss_timeout_sec": 0, 00:04:12.402 "reconnect_delay_sec": 0, 00:04:12.402 "fast_io_fail_timeout_sec": 0, 00:04:12.402 "disable_auto_failback": false, 00:04:12.402 "generate_uuids": false, 00:04:12.402 "transport_tos": 0, 00:04:12.402 "nvme_error_stat": false, 00:04:12.402 "rdma_srq_size": 0, 00:04:12.402 "io_path_stat": false, 00:04:12.402 
"allow_accel_sequence": false, 00:04:12.402 "rdma_max_cq_size": 0, 00:04:12.402 "rdma_cm_event_timeout_ms": 0, 00:04:12.402 "dhchap_digests": [ 00:04:12.402 "sha256", 00:04:12.402 "sha384", 00:04:12.402 "sha512" 00:04:12.402 ], 00:04:12.402 "dhchap_dhgroups": [ 00:04:12.402 "null", 00:04:12.402 "ffdhe2048", 00:04:12.402 "ffdhe3072", 00:04:12.402 "ffdhe4096", 00:04:12.402 "ffdhe6144", 00:04:12.402 "ffdhe8192" 00:04:12.402 ] 00:04:12.402 } 00:04:12.402 }, 00:04:12.402 { 00:04:12.402 "method": "bdev_nvme_set_hotplug", 00:04:12.402 "params": { 00:04:12.402 "period_us": 100000, 00:04:12.403 "enable": false 00:04:12.403 } 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "method": "bdev_wait_for_examine" 00:04:12.403 } 00:04:12.403 ] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "scsi", 00:04:12.403 "config": null 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "scheduler", 00:04:12.403 "config": [ 00:04:12.403 { 00:04:12.403 "method": "framework_set_scheduler", 00:04:12.403 "params": { 00:04:12.403 "name": "static" 00:04:12.403 } 00:04:12.403 } 00:04:12.403 ] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "vhost_scsi", 00:04:12.403 "config": [] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "vhost_blk", 00:04:12.403 "config": [] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "ublk", 00:04:12.403 "config": [] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "nbd", 00:04:12.403 "config": [] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "nvmf", 00:04:12.403 "config": [ 00:04:12.403 { 00:04:12.403 "method": "nvmf_set_config", 00:04:12.403 "params": { 00:04:12.403 "discovery_filter": "match_any", 00:04:12.403 "admin_cmd_passthru": { 00:04:12.403 "identify_ctrlr": false 00:04:12.403 }, 00:04:12.403 "dhchap_digests": [ 00:04:12.403 "sha256", 00:04:12.403 "sha384", 00:04:12.403 "sha512" 00:04:12.403 ], 00:04:12.403 "dhchap_dhgroups": [ 00:04:12.403 "null", 00:04:12.403 "ffdhe2048", 00:04:12.403 "ffdhe3072", 00:04:12.403 "ffdhe4096", 00:04:12.403 "ffdhe6144", 00:04:12.403 "ffdhe8192" 00:04:12.403 ] 00:04:12.403 } 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "method": "nvmf_set_max_subsystems", 00:04:12.403 "params": { 00:04:12.403 "max_subsystems": 1024 00:04:12.403 } 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "method": "nvmf_set_crdt", 00:04:12.403 "params": { 00:04:12.403 "crdt1": 0, 00:04:12.403 "crdt2": 0, 00:04:12.403 "crdt3": 0 00:04:12.403 } 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "method": "nvmf_create_transport", 00:04:12.403 "params": { 00:04:12.403 "trtype": "TCP", 00:04:12.403 "max_queue_depth": 128, 00:04:12.403 "max_io_qpairs_per_ctrlr": 127, 00:04:12.403 "in_capsule_data_size": 4096, 00:04:12.403 "max_io_size": 131072, 00:04:12.403 "io_unit_size": 131072, 00:04:12.403 "max_aq_depth": 128, 00:04:12.403 "num_shared_buffers": 511, 00:04:12.403 "buf_cache_size": 4294967295, 00:04:12.403 "dif_insert_or_strip": false, 00:04:12.403 "zcopy": false, 00:04:12.403 "c2h_success": true, 00:04:12.403 "sock_priority": 0, 00:04:12.403 "abort_timeout_sec": 1, 00:04:12.403 "ack_timeout": 0, 00:04:12.403 "data_wr_pool_size": 0 00:04:12.403 } 00:04:12.403 } 00:04:12.403 ] 00:04:12.403 }, 00:04:12.403 { 00:04:12.403 "subsystem": "iscsi", 00:04:12.403 "config": [ 00:04:12.403 { 00:04:12.403 "method": "iscsi_set_options", 00:04:12.403 "params": { 00:04:12.403 "node_base": "iqn.2016-06.io.spdk", 00:04:12.403 "max_sessions": 128, 00:04:12.403 "max_connections_per_session": 2, 00:04:12.403 "max_queue_depth": 64, 00:04:12.403 "default_time2wait": 2, 
00:04:12.403 "default_time2retain": 20, 00:04:12.403 "first_burst_length": 8192, 00:04:12.403 "immediate_data": true, 00:04:12.403 "allow_duplicated_isid": false, 00:04:12.403 "error_recovery_level": 0, 00:04:12.403 "nop_timeout": 60, 00:04:12.403 "nop_in_interval": 30, 00:04:12.403 "disable_chap": false, 00:04:12.403 "require_chap": false, 00:04:12.403 "mutual_chap": false, 00:04:12.403 "chap_group": 0, 00:04:12.403 "max_large_datain_per_connection": 64, 00:04:12.403 "max_r2t_per_connection": 4, 00:04:12.403 "pdu_pool_size": 36864, 00:04:12.403 "immediate_data_pool_size": 16384, 00:04:12.403 "data_out_pool_size": 2048 00:04:12.403 } 00:04:12.403 } 00:04:12.403 ] 00:04:12.403 } 00:04:12.403 ] 00:04:12.403 } 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57848 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57848 ']' 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57848 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57848 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:12.403 killing process with pid 57848 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57848' 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57848 00:04:12.403 17:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57848 00:04:13.787 17:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57889 00:04:13.787 17:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:13.787 17:34:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57889 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57889 ']' 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57889 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57889 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:19.061 killing process with pid 57889 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57889' 00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57889 
00:04:19.061 17:34:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57889 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.436 00:04:20.436 real 0m8.755s 00:04:20.436 user 0m8.219s 00:04:20.436 sys 0m0.698s 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.436 ************************************ 00:04:20.436 END TEST skip_rpc_with_json 00:04:20.436 ************************************ 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.436 17:34:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.436 17:34:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.436 17:34:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.436 17:34:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.436 ************************************ 00:04:20.436 START TEST skip_rpc_with_delay 00:04:20.436 ************************************ 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:20.436 17:34:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.436 [2024-10-13 17:34:10.004873] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
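The *ERROR* line above is the whole point of skip_rpc_with_delay: --wait-for-rpc holds subsystem initialization until a start-up RPC releases it, which can never arrive under --no-rpc-server, so spdk_tgt must refuse the combination and exit non-zero. Reduced to a sketch, the assertion is:

if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc" >&2
    exit 1
fi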
00:04:20.436 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:20.436 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.436 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.436 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.436 00:04:20.436 real 0m0.136s 00:04:20.436 user 0m0.063s 00:04:20.436 sys 0m0.071s 00:04:20.437 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.437 ************************************ 00:04:20.437 END TEST skip_rpc_with_delay 00:04:20.437 17:34:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.437 ************************************ 00:04:20.437 17:34:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.437 17:34:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.437 17:34:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.437 17:34:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.437 17:34:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.437 17:34:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.437 ************************************ 00:04:20.437 START TEST exit_on_failed_rpc_init 00:04:20.437 ************************************ 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58010 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58010 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58010 ']' 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.437 17:34:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.437 [2024-10-13 17:34:10.201814] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
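exit_on_failed_rpc_init starts one target (pid 58010) that owns the default RPC socket, then launches a second one that must die during RPC init because the socket is taken — the "in use. Specify another." errors just below. A sketch of the collision; the -r/--rpc-socket option in the closing comment is the usual escape hatch and is an assumption here, not something this log exercises:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # owns /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2      # fails: RPC socket path already in use
# a second instance needs its own socket, e.g. spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock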
00:04:20.437 [2024-10-13 17:34:10.201956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58010 ] 00:04:20.695 [2024-10-13 17:34:10.352785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.695 [2024-10-13 17:34:10.450331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.259 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.517 [2024-10-13 17:34:11.113617] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:21.517 [2024-10-13 17:34:11.113751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58028 ] 00:04:21.517 [2024-10-13 17:34:11.262868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.816 [2024-10-13 17:34:11.370254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.816 [2024-10-13 17:34:11.370337] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:21.816 [2024-10-13 17:34:11.370355] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.816 [2024-10-13 17:34:11.370372] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58010 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58010 ']' 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58010 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58010 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58010' 00:04:21.816 killing process with pid 58010 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58010 00:04:21.816 17:34:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58010 00:04:23.223 00:04:23.223 real 0m2.695s 00:04:23.223 user 0m2.995s 00:04:23.223 sys 0m0.425s 00:04:23.223 17:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.223 ************************************ 00:04:23.223 END TEST exit_on_failed_rpc_init 00:04:23.223 ************************************ 00:04:23.223 17:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 17:34:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.223 00:04:23.223 real 0m18.160s 00:04:23.223 user 0m17.247s 00:04:23.223 sys 0m1.650s 00:04:23.223 17:34:12 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.223 17:34:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 ************************************ 00:04:23.223 END TEST skip_rpc 00:04:23.223 ************************************ 00:04:23.223 17:34:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.223 17:34:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.223 17:34:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.223 17:34:12 -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 
************************************ 00:04:23.223 START TEST rpc_client 00:04:23.223 ************************************ 00:04:23.223 17:34:12 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.223 * Looking for test storage... 00:04:23.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:23.223 17:34:12 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.223 17:34:12 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.223 17:34:12 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.223 17:34:13 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.223 17:34:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.483 17:34:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:23.483 OK 00:04:23.483 17:34:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:23.483 00:04:23.483 real 0m0.205s 00:04:23.483 user 0m0.113s 00:04:23.483 sys 0m0.098s 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.483 17:34:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:23.483 ************************************ 00:04:23.483 END TEST rpc_client 00:04:23.483 ************************************ 00:04:23.483 17:34:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:23.483 17:34:13 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.483 17:34:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.483 17:34:13 -- common/autotest_common.sh@10 -- # set +x 00:04:23.483 ************************************ 00:04:23.483 START TEST json_config 00:04:23.483 ************************************ 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.483 17:34:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.483 17:34:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.483 17:34:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.483 17:34:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.483 17:34:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.483 17:34:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:23.483 17:34:13 json_config -- scripts/common.sh@345 -- # : 1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.483 17:34:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.483 17:34:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@353 -- # local d=1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.483 17:34:13 json_config -- scripts/common.sh@355 -- # echo 1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.483 17:34:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@353 -- # local d=2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.483 17:34:13 json_config -- scripts/common.sh@355 -- # echo 2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.483 17:34:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.483 17:34:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.483 17:34:13 json_config -- scripts/common.sh@368 -- # return 0 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.483 --rc genhtml_branch_coverage=1 00:04:23.483 --rc genhtml_function_coverage=1 00:04:23.483 --rc genhtml_legend=1 00:04:23.483 --rc geninfo_all_blocks=1 00:04:23.483 --rc geninfo_unexecuted_blocks=1 00:04:23.483 00:04:23.483 ' 00:04:23.483 17:34:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.483 17:34:13 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.483 17:34:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.483 17:34:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.483 17:34:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.483 17:34:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.483 17:34:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.483 17:34:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.484 17:34:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.484 17:34:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.484 17:34:13 json_config -- paths/export.sh@5 -- # export PATH 00:04:23.484 17:34:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@51 -- # : 0 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.484 17:34:13 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.484 17:34:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:23.484 WARNING: No tests are enabled so not running JSON configuration tests 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:23.484 17:34:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:23.484 00:04:23.484 real 0m0.149s 00:04:23.484 user 0m0.093s 00:04:23.484 sys 0m0.052s 00:04:23.484 17:34:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.484 17:34:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.484 ************************************ 00:04:23.484 END TEST json_config 00:04:23.484 ************************************ 00:04:23.743 17:34:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.743 17:34:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.743 17:34:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.743 17:34:13 -- common/autotest_common.sh@10 -- # set +x 00:04:23.743 ************************************ 00:04:23.743 START TEST json_config_extra_key 00:04:23.743 ************************************ 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.743 17:34:13 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.743 17:34:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.743 --rc genhtml_branch_coverage=1 00:04:23.743 --rc genhtml_function_coverage=1 00:04:23.743 --rc genhtml_legend=1 00:04:23.743 --rc geninfo_all_blocks=1 00:04:23.743 --rc geninfo_unexecuted_blocks=1 00:04:23.743 00:04:23.743 ' 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.743 --rc genhtml_branch_coverage=1 00:04:23.743 --rc genhtml_function_coverage=1 00:04:23.743 --rc genhtml_legend=1 00:04:23.743 --rc geninfo_all_blocks=1 00:04:23.743 --rc geninfo_unexecuted_blocks=1 00:04:23.743 00:04:23.743 ' 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.743 --rc genhtml_branch_coverage=1 00:04:23.743 --rc genhtml_function_coverage=1 00:04:23.743 --rc genhtml_legend=1 00:04:23.743 --rc geninfo_all_blocks=1 00:04:23.743 --rc geninfo_unexecuted_blocks=1 00:04:23.743 00:04:23.743 ' 00:04:23.743 17:34:13 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.743 --rc genhtml_branch_coverage=1 00:04:23.743 --rc 
genhtml_function_coverage=1 00:04:23.743 --rc genhtml_legend=1 00:04:23.743 --rc geninfo_all_blocks=1 00:04:23.743 --rc geninfo_unexecuted_blocks=1 00:04:23.743 00:04:23.743 ' 00:04:23.743 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.743 17:34:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a1eab665-c6b7-4d65-afbe-25a7f69307b9 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.744 17:34:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.744 17:34:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.744 17:34:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.744 17:34:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.744 17:34:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.744 17:34:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.744 17:34:13 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.744 17:34:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.744 17:34:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.744 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.744 17:34:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.744 INFO: launching applications... 00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
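The "[: : integer expression expected" message, emitted above each time nvmf/common.sh is sourced, comes from its line 33 evaluating '[' '' -eq 1 ']': test's -eq requires integer operands, and an unset variable expands to an empty string. The check simply returns non-zero, so the tests proceed, but the noise is avoidable by defaulting the value. A minimal sketch of the failure and one possible guard (SOME_FLAG is a hypothetical stand-in; the real variable name is not visible in this log):

    SOME_FLAG=""
    # Reproduces the warning: an empty string is not an integer
    [ "$SOME_FLAG" -eq 1 ] && echo on      # -> "[: : integer expression expected", status 2

    # Defensive rewrite: default the empty value to 0 before the numeric test
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo on # quiet; the condition is simply false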
00:04:23.744 17:34:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58222 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.744 Waiting for target to run... 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.744 17:34:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58222 /var/tmp/spdk_tgt.sock 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58222 ']' 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.744 17:34:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.744 [2024-10-13 17:34:13.540011] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:23.744 [2024-10-13 17:34:13.540337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58222 ] 00:04:24.310 [2024-10-13 17:34:13.898188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.310 [2024-10-13 17:34:13.992107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.877 17:34:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.877 17:34:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.877 00:04:24.877 INFO: shutting down applications... 00:04:24.877 17:34:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
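The extra-key run above boots a dedicated target (pid 58222) with -r pointing at its own RPC socket and --json preloading the configuration, then waitforlisten polls until the socket answers RPCs before the test proceeds. A minimal sketch of that start-and-wait pattern, assuming scripts/rpc.py as the client (the 100 x 0.1 s retry cadence is illustrative, not the exact waitforlisten logic):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    tgt_pid=$!

    # Poll the RPC socket until the target responds (give up after ~10 s)
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done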
00:04:24.877 17:34:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58222 ]] 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58222 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58222 00:04:24.877 17:34:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.443 17:34:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.443 17:34:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.443 17:34:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58222 00:04:25.443 17:34:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.702 17:34:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.702 17:34:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.702 17:34:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58222 00:04:25.702 17:34:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.271 17:34:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.271 17:34:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.271 17:34:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58222 00:04:26.271 17:34:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.891 SPDK target shutdown done 00:04:26.891 Success 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58222 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.891 17:34:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.891 17:34:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:26.891 00:04:26.891 real 0m3.187s 00:04:26.891 user 0m2.622s 00:04:26.891 sys 0m0.438s 00:04:26.891 ************************************ 00:04:26.891 END TEST json_config_extra_key 00:04:26.891 ************************************ 00:04:26.891 17:34:16 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.891 17:34:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.891 17:34:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.891 17:34:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.891 17:34:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.891 17:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:26.891 
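The teardown above is the inverse: json_config_test_shutdown_app sends SIGINT to pid 58222 and then polls kill -0 every 0.5 s, allowing up to 30 iterations (~15 s) for a clean exit before the helper would fail the test; here the target exits after four polls and 'SPDK target shutdown done' is printed. A condensed sketch of that wait loop:

    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done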
************************************ 00:04:26.891 START TEST alias_rpc 00:04:26.891 ************************************ 00:04:26.891 17:34:16 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.891 * Looking for test storage... 00:04:26.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:26.891 17:34:16 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:26.891 17:34:16 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:26.891 17:34:16 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.176 17:34:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.176 --rc genhtml_branch_coverage=1 00:04:27.176 --rc genhtml_function_coverage=1 00:04:27.176 --rc genhtml_legend=1 00:04:27.176 --rc geninfo_all_blocks=1 00:04:27.176 --rc geninfo_unexecuted_blocks=1 00:04:27.176 00:04:27.176 ' 00:04:27.176 17:34:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.176 17:34:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58315 00:04:27.176 17:34:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58315 00:04:27.176 17:34:16 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58315 ']' 00:04:27.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.177 17:34:16 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.177 17:34:16 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.177 17:34:16 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
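The lt 1.15 2 trace repeated at the top of each test is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' into arrays and compared element-wise as integers, deciding at the first differing component. A condensed reconstruction of that comparison, assuming numeric components as the trace shows:

    lt() {  # usage: lt A B -> status 0 when version A < version B
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # Missing components default to 0, so 1.15 vs 2 compares 1 vs 2 first
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less
    }
    lt 1.15 2 && echo 'lcov 1.15 predates 2'   # decided by 1 < 2 at the first component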
00:04:27.177 17:34:16 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.177 17:34:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.177 17:34:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.177 [2024-10-13 17:34:16.802930] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:27.177 [2024-10-13 17:34:16.803056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58315 ] 00:04:27.177 [2024-10-13 17:34:16.956599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.437 [2024-10-13 17:34:17.072109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.012 17:34:17 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.012 17:34:17 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:28.012 17:34:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:28.281 17:34:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58315 00:04:28.281 17:34:17 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58315 ']' 00:04:28.281 17:34:17 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58315 00:04:28.281 17:34:17 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:28.281 17:34:17 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.281 17:34:17 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58315 00:04:28.281 killing process with pid 58315 00:04:28.281 17:34:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.281 17:34:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.282 17:34:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58315' 00:04:28.282 17:34:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 58315 00:04:28.282 17:34:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 58315 00:04:29.659 ************************************ 00:04:29.659 END TEST alias_rpc 00:04:29.659 ************************************ 00:04:29.659 00:04:29.659 real 0m2.793s 00:04:29.659 user 0m2.856s 00:04:29.659 sys 0m0.485s 00:04:29.659 17:34:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.659 17:34:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.659 17:34:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:29.659 17:34:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.659 17:34:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.659 17:34:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.659 17:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:29.659 ************************************ 00:04:29.659 START TEST spdkcli_tcp 00:04:29.659 ************************************ 00:04:29.659 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.659 * Looking for test storage... 
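killprocess, traced above for pid 58315, is autotest_common.sh's standard teardown: it validates its argument, confirms the process is alive with kill -0, reads the comm name via ps (reactor_0 here, which decides whether a sudo wrapper must be signalled instead), then kills and reaps it. A condensed sketch, omitting the FreeBSD and sudo branches:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                        # no pid supplied
        kill -0 "$pid" || return 0                       # already gone
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"                                      # SIGTERM by default
        wait "$pid" 2>/dev/null || true                  # reap it if it is our child
    }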
00:04:29.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:29.659 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.659 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.659 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.917 17:34:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.917 --rc genhtml_branch_coverage=1 00:04:29.917 --rc genhtml_function_coverage=1 00:04:29.917 --rc genhtml_legend=1 00:04:29.917 --rc geninfo_all_blocks=1 00:04:29.917 --rc geninfo_unexecuted_blocks=1 00:04:29.917 00:04:29.917 ' 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.917 --rc genhtml_branch_coverage=1 00:04:29.917 --rc genhtml_function_coverage=1 00:04:29.917 --rc genhtml_legend=1 00:04:29.917 --rc geninfo_all_blocks=1 00:04:29.917 --rc geninfo_unexecuted_blocks=1 00:04:29.917 
00:04:29.917 ' 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.917 --rc genhtml_branch_coverage=1 00:04:29.917 --rc genhtml_function_coverage=1 00:04:29.917 --rc genhtml_legend=1 00:04:29.917 --rc geninfo_all_blocks=1 00:04:29.917 --rc geninfo_unexecuted_blocks=1 00:04:29.917 00:04:29.917 ' 00:04:29.917 17:34:19 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.917 --rc genhtml_branch_coverage=1 00:04:29.917 --rc genhtml_function_coverage=1 00:04:29.917 --rc genhtml_legend=1 00:04:29.917 --rc geninfo_all_blocks=1 00:04:29.917 --rc geninfo_unexecuted_blocks=1 00:04:29.917 00:04:29.917 ' 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58405 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58405 00:04:29.918 17:34:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58405 ']' 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.918 17:34:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.918 [2024-10-13 17:34:19.617128] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
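tcp.sh above pins IP_ADDRESS=127.0.0.1 and PORT=9998 and starts the target with -m 0x3 -p 0 (coremask 0x3 sets bits 0 and 1, so the EAL lines that follow bring up two reactors, with core 0 as the main core). The test then bridges the UNIX-domain RPC socket to that TCP endpoint with socat and drives rpc.py over TCP, as the socat/rpc_get_methods trace further below shows. The bridge in isolation, with the exact flags from this run:

    # Expose the UNIX-domain RPC socket on a local TCP port
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # One RPC through the bridge: 100 connect retries, 2 s timeout
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    # socat serves a single connection by default and may have exited already
    kill "$socat_pid" 2>/dev/null || true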
00:04:29.918 [2024-10-13 17:34:19.617364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58405 ] 00:04:30.176 [2024-10-13 17:34:19.766283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.176 [2024-10-13 17:34:19.864125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.176 [2024-10-13 17:34:19.864183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.742 17:34:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.742 17:34:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:30.742 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.742 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58422 00:04:30.742 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.001 [ 00:04:31.001 "bdev_malloc_delete", 00:04:31.001 "bdev_malloc_create", 00:04:31.001 "bdev_null_resize", 00:04:31.001 "bdev_null_delete", 00:04:31.001 "bdev_null_create", 00:04:31.001 "bdev_nvme_cuse_unregister", 00:04:31.001 "bdev_nvme_cuse_register", 00:04:31.001 "bdev_opal_new_user", 00:04:31.001 "bdev_opal_set_lock_state", 00:04:31.001 "bdev_opal_delete", 00:04:31.001 "bdev_opal_get_info", 00:04:31.001 "bdev_opal_create", 00:04:31.001 "bdev_nvme_opal_revert", 00:04:31.001 "bdev_nvme_opal_init", 00:04:31.001 "bdev_nvme_send_cmd", 00:04:31.001 "bdev_nvme_set_keys", 00:04:31.001 "bdev_nvme_get_path_iostat", 00:04:31.001 "bdev_nvme_get_mdns_discovery_info", 00:04:31.001 "bdev_nvme_stop_mdns_discovery", 00:04:31.001 "bdev_nvme_start_mdns_discovery", 00:04:31.001 "bdev_nvme_set_multipath_policy", 00:04:31.001 "bdev_nvme_set_preferred_path", 00:04:31.001 "bdev_nvme_get_io_paths", 00:04:31.001 "bdev_nvme_remove_error_injection", 00:04:31.001 "bdev_nvme_add_error_injection", 00:04:31.001 "bdev_nvme_get_discovery_info", 00:04:31.001 "bdev_nvme_stop_discovery", 00:04:31.001 "bdev_nvme_start_discovery", 00:04:31.001 "bdev_nvme_get_controller_health_info", 00:04:31.001 "bdev_nvme_disable_controller", 00:04:31.001 "bdev_nvme_enable_controller", 00:04:31.001 "bdev_nvme_reset_controller", 00:04:31.001 "bdev_nvme_get_transport_statistics", 00:04:31.001 "bdev_nvme_apply_firmware", 00:04:31.001 "bdev_nvme_detach_controller", 00:04:31.001 "bdev_nvme_get_controllers", 00:04:31.001 "bdev_nvme_attach_controller", 00:04:31.001 "bdev_nvme_set_hotplug", 00:04:31.001 "bdev_nvme_set_options", 00:04:31.001 "bdev_passthru_delete", 00:04:31.001 "bdev_passthru_create", 00:04:31.001 "bdev_lvol_set_parent_bdev", 00:04:31.001 "bdev_lvol_set_parent", 00:04:31.001 "bdev_lvol_check_shallow_copy", 00:04:31.001 "bdev_lvol_start_shallow_copy", 00:04:31.001 "bdev_lvol_grow_lvstore", 00:04:31.001 "bdev_lvol_get_lvols", 00:04:31.001 "bdev_lvol_get_lvstores", 00:04:31.001 "bdev_lvol_delete", 00:04:31.001 "bdev_lvol_set_read_only", 00:04:31.001 "bdev_lvol_resize", 00:04:31.001 "bdev_lvol_decouple_parent", 00:04:31.001 "bdev_lvol_inflate", 00:04:31.001 "bdev_lvol_rename", 00:04:31.001 "bdev_lvol_clone_bdev", 00:04:31.001 "bdev_lvol_clone", 00:04:31.001 "bdev_lvol_snapshot", 00:04:31.001 "bdev_lvol_create", 00:04:31.001 "bdev_lvol_delete_lvstore", 00:04:31.001 "bdev_lvol_rename_lvstore", 00:04:31.001 
"bdev_lvol_create_lvstore", 00:04:31.001 "bdev_raid_set_options", 00:04:31.001 "bdev_raid_remove_base_bdev", 00:04:31.001 "bdev_raid_add_base_bdev", 00:04:31.001 "bdev_raid_delete", 00:04:31.001 "bdev_raid_create", 00:04:31.001 "bdev_raid_get_bdevs", 00:04:31.001 "bdev_error_inject_error", 00:04:31.001 "bdev_error_delete", 00:04:31.001 "bdev_error_create", 00:04:31.001 "bdev_split_delete", 00:04:31.001 "bdev_split_create", 00:04:31.001 "bdev_delay_delete", 00:04:31.001 "bdev_delay_create", 00:04:31.001 "bdev_delay_update_latency", 00:04:31.002 "bdev_zone_block_delete", 00:04:31.002 "bdev_zone_block_create", 00:04:31.002 "blobfs_create", 00:04:31.002 "blobfs_detect", 00:04:31.002 "blobfs_set_cache_size", 00:04:31.002 "bdev_xnvme_delete", 00:04:31.002 "bdev_xnvme_create", 00:04:31.002 "bdev_aio_delete", 00:04:31.002 "bdev_aio_rescan", 00:04:31.002 "bdev_aio_create", 00:04:31.002 "bdev_ftl_set_property", 00:04:31.002 "bdev_ftl_get_properties", 00:04:31.002 "bdev_ftl_get_stats", 00:04:31.002 "bdev_ftl_unmap", 00:04:31.002 "bdev_ftl_unload", 00:04:31.002 "bdev_ftl_delete", 00:04:31.002 "bdev_ftl_load", 00:04:31.002 "bdev_ftl_create", 00:04:31.002 "bdev_virtio_attach_controller", 00:04:31.002 "bdev_virtio_scsi_get_devices", 00:04:31.002 "bdev_virtio_detach_controller", 00:04:31.002 "bdev_virtio_blk_set_hotplug", 00:04:31.002 "bdev_iscsi_delete", 00:04:31.002 "bdev_iscsi_create", 00:04:31.002 "bdev_iscsi_set_options", 00:04:31.002 "accel_error_inject_error", 00:04:31.002 "ioat_scan_accel_module", 00:04:31.002 "dsa_scan_accel_module", 00:04:31.002 "iaa_scan_accel_module", 00:04:31.002 "keyring_file_remove_key", 00:04:31.002 "keyring_file_add_key", 00:04:31.002 "keyring_linux_set_options", 00:04:31.002 "fsdev_aio_delete", 00:04:31.002 "fsdev_aio_create", 00:04:31.002 "iscsi_get_histogram", 00:04:31.002 "iscsi_enable_histogram", 00:04:31.002 "iscsi_set_options", 00:04:31.002 "iscsi_get_auth_groups", 00:04:31.002 "iscsi_auth_group_remove_secret", 00:04:31.002 "iscsi_auth_group_add_secret", 00:04:31.002 "iscsi_delete_auth_group", 00:04:31.002 "iscsi_create_auth_group", 00:04:31.002 "iscsi_set_discovery_auth", 00:04:31.002 "iscsi_get_options", 00:04:31.002 "iscsi_target_node_request_logout", 00:04:31.002 "iscsi_target_node_set_redirect", 00:04:31.002 "iscsi_target_node_set_auth", 00:04:31.002 "iscsi_target_node_add_lun", 00:04:31.002 "iscsi_get_stats", 00:04:31.002 "iscsi_get_connections", 00:04:31.002 "iscsi_portal_group_set_auth", 00:04:31.002 "iscsi_start_portal_group", 00:04:31.002 "iscsi_delete_portal_group", 00:04:31.002 "iscsi_create_portal_group", 00:04:31.002 "iscsi_get_portal_groups", 00:04:31.002 "iscsi_delete_target_node", 00:04:31.002 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.002 "iscsi_target_node_add_pg_ig_maps", 00:04:31.002 "iscsi_create_target_node", 00:04:31.002 "iscsi_get_target_nodes", 00:04:31.002 "iscsi_delete_initiator_group", 00:04:31.002 "iscsi_initiator_group_remove_initiators", 00:04:31.002 "iscsi_initiator_group_add_initiators", 00:04:31.002 "iscsi_create_initiator_group", 00:04:31.002 "iscsi_get_initiator_groups", 00:04:31.002 "nvmf_set_crdt", 00:04:31.002 "nvmf_set_config", 00:04:31.002 "nvmf_set_max_subsystems", 00:04:31.002 "nvmf_stop_mdns_prr", 00:04:31.002 "nvmf_publish_mdns_prr", 00:04:31.002 "nvmf_subsystem_get_listeners", 00:04:31.002 "nvmf_subsystem_get_qpairs", 00:04:31.002 "nvmf_subsystem_get_controllers", 00:04:31.002 "nvmf_get_stats", 00:04:31.002 "nvmf_get_transports", 00:04:31.002 "nvmf_create_transport", 00:04:31.002 "nvmf_get_targets", 00:04:31.002 
"nvmf_delete_target", 00:04:31.002 "nvmf_create_target", 00:04:31.002 "nvmf_subsystem_allow_any_host", 00:04:31.002 "nvmf_subsystem_set_keys", 00:04:31.002 "nvmf_subsystem_remove_host", 00:04:31.002 "nvmf_subsystem_add_host", 00:04:31.002 "nvmf_ns_remove_host", 00:04:31.002 "nvmf_ns_add_host", 00:04:31.002 "nvmf_subsystem_remove_ns", 00:04:31.002 "nvmf_subsystem_set_ns_ana_group", 00:04:31.002 "nvmf_subsystem_add_ns", 00:04:31.002 "nvmf_subsystem_listener_set_ana_state", 00:04:31.002 "nvmf_discovery_get_referrals", 00:04:31.002 "nvmf_discovery_remove_referral", 00:04:31.002 "nvmf_discovery_add_referral", 00:04:31.002 "nvmf_subsystem_remove_listener", 00:04:31.002 "nvmf_subsystem_add_listener", 00:04:31.002 "nvmf_delete_subsystem", 00:04:31.002 "nvmf_create_subsystem", 00:04:31.002 "nvmf_get_subsystems", 00:04:31.002 "env_dpdk_get_mem_stats", 00:04:31.002 "nbd_get_disks", 00:04:31.002 "nbd_stop_disk", 00:04:31.002 "nbd_start_disk", 00:04:31.002 "ublk_recover_disk", 00:04:31.002 "ublk_get_disks", 00:04:31.002 "ublk_stop_disk", 00:04:31.002 "ublk_start_disk", 00:04:31.002 "ublk_destroy_target", 00:04:31.002 "ublk_create_target", 00:04:31.002 "virtio_blk_create_transport", 00:04:31.002 "virtio_blk_get_transports", 00:04:31.002 "vhost_controller_set_coalescing", 00:04:31.002 "vhost_get_controllers", 00:04:31.002 "vhost_delete_controller", 00:04:31.002 "vhost_create_blk_controller", 00:04:31.002 "vhost_scsi_controller_remove_target", 00:04:31.002 "vhost_scsi_controller_add_target", 00:04:31.002 "vhost_start_scsi_controller", 00:04:31.002 "vhost_create_scsi_controller", 00:04:31.002 "thread_set_cpumask", 00:04:31.002 "scheduler_set_options", 00:04:31.002 "framework_get_governor", 00:04:31.002 "framework_get_scheduler", 00:04:31.002 "framework_set_scheduler", 00:04:31.002 "framework_get_reactors", 00:04:31.002 "thread_get_io_channels", 00:04:31.002 "thread_get_pollers", 00:04:31.002 "thread_get_stats", 00:04:31.002 "framework_monitor_context_switch", 00:04:31.002 "spdk_kill_instance", 00:04:31.002 "log_enable_timestamps", 00:04:31.002 "log_get_flags", 00:04:31.002 "log_clear_flag", 00:04:31.002 "log_set_flag", 00:04:31.002 "log_get_level", 00:04:31.002 "log_set_level", 00:04:31.002 "log_get_print_level", 00:04:31.002 "log_set_print_level", 00:04:31.002 "framework_enable_cpumask_locks", 00:04:31.002 "framework_disable_cpumask_locks", 00:04:31.002 "framework_wait_init", 00:04:31.002 "framework_start_init", 00:04:31.002 "scsi_get_devices", 00:04:31.002 "bdev_get_histogram", 00:04:31.002 "bdev_enable_histogram", 00:04:31.002 "bdev_set_qos_limit", 00:04:31.002 "bdev_set_qd_sampling_period", 00:04:31.002 "bdev_get_bdevs", 00:04:31.002 "bdev_reset_iostat", 00:04:31.002 "bdev_get_iostat", 00:04:31.002 "bdev_examine", 00:04:31.002 "bdev_wait_for_examine", 00:04:31.002 "bdev_set_options", 00:04:31.002 "accel_get_stats", 00:04:31.002 "accel_set_options", 00:04:31.002 "accel_set_driver", 00:04:31.002 "accel_crypto_key_destroy", 00:04:31.002 "accel_crypto_keys_get", 00:04:31.002 "accel_crypto_key_create", 00:04:31.002 "accel_assign_opc", 00:04:31.002 "accel_get_module_info", 00:04:31.002 "accel_get_opc_assignments", 00:04:31.002 "vmd_rescan", 00:04:31.002 "vmd_remove_device", 00:04:31.002 "vmd_enable", 00:04:31.002 "sock_get_default_impl", 00:04:31.002 "sock_set_default_impl", 00:04:31.002 "sock_impl_set_options", 00:04:31.002 "sock_impl_get_options", 00:04:31.002 "iobuf_get_stats", 00:04:31.002 "iobuf_set_options", 00:04:31.002 "keyring_get_keys", 00:04:31.002 "framework_get_pci_devices", 00:04:31.002 
"framework_get_config", 00:04:31.002 "framework_get_subsystems", 00:04:31.002 "fsdev_set_opts", 00:04:31.002 "fsdev_get_opts", 00:04:31.002 "trace_get_info", 00:04:31.002 "trace_get_tpoint_group_mask", 00:04:31.002 "trace_disable_tpoint_group", 00:04:31.002 "trace_enable_tpoint_group", 00:04:31.002 "trace_clear_tpoint_mask", 00:04:31.002 "trace_set_tpoint_mask", 00:04:31.002 "notify_get_notifications", 00:04:31.002 "notify_get_types", 00:04:31.002 "spdk_get_version", 00:04:31.002 "rpc_get_methods" 00:04:31.002 ] 00:04:31.002 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.002 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.002 17:34:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58405 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58405 ']' 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58405 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58405 00:04:31.002 killing process with pid 58405 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58405' 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58405 00:04:31.002 17:34:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58405 00:04:32.377 ************************************ 00:04:32.377 END TEST spdkcli_tcp 00:04:32.377 ************************************ 00:04:32.377 00:04:32.377 real 0m2.522s 00:04:32.377 user 0m4.516s 00:04:32.377 sys 0m0.419s 00:04:32.377 17:34:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.377 17:34:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.377 17:34:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.377 17:34:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.377 17:34:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.377 17:34:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.377 ************************************ 00:04:32.377 START TEST dpdk_mem_utility 00:04:32.377 ************************************ 00:04:32.377 17:34:21 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.377 * Looking for test storage... 
00:04:32.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.377 17:34:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.377 --rc genhtml_branch_coverage=1 00:04:32.377 --rc genhtml_function_coverage=1 00:04:32.377 --rc genhtml_legend=1 00:04:32.377 --rc geninfo_all_blocks=1 00:04:32.377 --rc geninfo_unexecuted_blocks=1 00:04:32.377 00:04:32.377 ' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.377 --rc 
genhtml_branch_coverage=1 00:04:32.377 --rc genhtml_function_coverage=1 00:04:32.377 --rc genhtml_legend=1 00:04:32.377 --rc geninfo_all_blocks=1 00:04:32.377 --rc geninfo_unexecuted_blocks=1 00:04:32.377 00:04:32.377 ' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.377 --rc genhtml_branch_coverage=1 00:04:32.377 --rc genhtml_function_coverage=1 00:04:32.377 --rc genhtml_legend=1 00:04:32.377 --rc geninfo_all_blocks=1 00:04:32.377 --rc geninfo_unexecuted_blocks=1 00:04:32.377 00:04:32.377 ' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.377 --rc genhtml_branch_coverage=1 00:04:32.377 --rc genhtml_function_coverage=1 00:04:32.377 --rc genhtml_legend=1 00:04:32.377 --rc geninfo_all_blocks=1 00:04:32.377 --rc geninfo_unexecuted_blocks=1 00:04:32.377 00:04:32.377 ' 00:04:32.377 17:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:32.377 17:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58511 00:04:32.377 17:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58511 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58511 ']' 00:04:32.377 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.378 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.378 17:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.378 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.378 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.378 17:34:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.639 [2024-10-13 17:34:22.199918] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
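The trace that follows asks the freshly started target for a DPDK memory snapshot and post-processes it: rpc_cmd env_dpdk_get_mem_stats has the target write /tmp/spdk_mem_dump.txt, then dpdk_mem_info.py summarizes heaps, mempools and memzones, and a second pass with -m 0 prints the per-element map of heap 0. The same steps by hand, with the paths from this run (rpc_cmd is autotest's thin wrapper around scripts/rpc.py):

    # Ask the running spdk_tgt to dump its DPDK memory state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize the dump: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Per-element detail for heap 0
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0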
00:04:32.639 [2024-10-13 17:34:22.200276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58511 ] 00:04:32.639 [2024-10-13 17:34:22.366203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.898 [2024-10-13 17:34:22.475813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.466 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.466 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:33.466 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:33.466 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:33.466 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.466 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.466 { 00:04:33.466 "filename": "/tmp/spdk_mem_dump.txt" 00:04:33.466 } 00:04:33.466 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.467 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:33.467 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:33.467 1 heaps totaling size 816.000000 MiB 00:04:33.467 size: 816.000000 MiB heap id: 0 00:04:33.467 end heaps---------- 00:04:33.467 9 mempools totaling size 595.772034 MiB 00:04:33.467 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:33.467 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:33.467 size: 92.545471 MiB name: bdev_io_58511 00:04:33.467 size: 50.003479 MiB name: msgpool_58511 00:04:33.467 size: 36.509338 MiB name: fsdev_io_58511 00:04:33.467 size: 21.763794 MiB name: PDU_Pool 00:04:33.467 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:33.467 size: 4.133484 MiB name: evtpool_58511 00:04:33.467 size: 0.026123 MiB name: Session_Pool 00:04:33.467 end mempools------- 00:04:33.467 6 memzones totaling size 4.142822 MiB 00:04:33.467 size: 1.000366 MiB name: RG_ring_0_58511 00:04:33.467 size: 1.000366 MiB name: RG_ring_1_58511 00:04:33.467 size: 1.000366 MiB name: RG_ring_4_58511 00:04:33.467 size: 1.000366 MiB name: RG_ring_5_58511 00:04:33.467 size: 0.125366 MiB name: RG_ring_2_58511 00:04:33.467 size: 0.015991 MiB name: RG_ring_3_58511 00:04:33.467 end memzones------- 00:04:33.467 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:33.467 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:04:33.467 list of free elements. 
00:04:33.467 list of free elements. size: 16.790649 MiB
00:04:33.467 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:33.467 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:33.467 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:33.467 element at address: 0x200018d00040 with size: 0.999939 MiB
00:04:33.467 element at address: 0x200019100040 with size: 0.999939 MiB
00:04:33.467 element at address: 0x200019200000 with size: 0.999084 MiB
00:04:33.467 element at address: 0x200031e00000 with size: 0.994324 MiB
00:04:33.467 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:33.467 element at address: 0x200018a00000 with size: 0.959656 MiB
00:04:33.467 element at address: 0x200019500040 with size: 0.936401 MiB
00:04:33.467 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:33.467 element at address: 0x20001ac00000 with size: 0.560730 MiB
00:04:33.467 element at address: 0x200000c00000 with size: 0.490662 MiB
00:04:33.467 element at address: 0x200018e00000 with size: 0.487976 MiB
00:04:33.467 element at address: 0x200019600000 with size: 0.485413 MiB
00:04:33.467 element at address: 0x200012c00000 with size: 0.443237 MiB
00:04:33.467 element at address: 0x200028000000 with size: 0.390442 MiB
00:04:33.467 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:33.467 list of standard malloc elements. size: 199.288452 MiB
00:04:33.467 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:33.467 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:33.467 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:33.467 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:33.467 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:33.467 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:33.467 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:33.467 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:33.467 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:33.467 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:33.467 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:33.467 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fec40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fed40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fee40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004fef40 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff340 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff440 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff540 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff640 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff740 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ffbc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e1c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e2c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e3c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e4c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e5c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e6c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e7c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e8c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087e9c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087eac0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087ebc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087ecc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087edc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087eec0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087efc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087f0c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087f1c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087f2c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:04:33.467 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7d9c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7dac0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7dbc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7dcc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7ddc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7dec0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7dfc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e0c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e1c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e2c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e3c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e4c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e5c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e6c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e7c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e8c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7e9c0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7eac0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000c7ebc0 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200000cff000 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff200 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff300 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff400 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff500 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff600 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff700 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff800 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ff900 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ffa00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ffb00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff180 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff280 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff380 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff480 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff580 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff680 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff780 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff880 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bff980 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bffa80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71780 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71880 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71980 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71a80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71b80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71c80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71d80 with size: 0.000244 MiB
00:04:33.467 element at address: 0x200012c71e80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200012c71f80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200012c72080 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200012c72180 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200012cf24c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018afdd00 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7cec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7cfc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d0c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d1c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d2c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d3c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d4c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d5c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d6c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d7c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d8c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018e7d9c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200018efdd00 with size: 0.000244 MiB
00:04:33.468 element at address: 0x2000192ffc40 with size: 0.000244 MiB
00:04:33.468 element at address: 0x2000195efbc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x2000195efcc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x2000196bc680 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8fac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8fec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac900c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac901c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac902c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac903c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac904c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac905c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac906c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac907c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac908c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac909c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90ac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90bc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90cc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90dc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90ec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac90fc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac910c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac911c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac912c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac913c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac914c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac915c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac916c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac917c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac918c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac919c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91ac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91bc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91cc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91dc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91ec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac91fc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac920c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac921c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac922c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac923c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac924c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac925c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac926c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac927c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac928c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac929c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92ac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92bc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92cc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92dc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92ec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac92fc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac930c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac931c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac932c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac933c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac934c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac935c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac936c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac937c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac938c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac939c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93ac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93bc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93cc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93dc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93ec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac93fc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac940c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac941c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac942c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac943c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac944c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac945c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac946c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac947c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac948c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac949c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94ac0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94bc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94cc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94dc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94ec0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac94fc0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac950c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac951c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac952c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20001ac953c0 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200028063f40 with size: 0.000244 MiB
00:04:33.468 element at address: 0x200028064040 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806ad00 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806af80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b080 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b180 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b280 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b380 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b480 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b580 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b680 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b780 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b880 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806b980 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806ba80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806bb80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806bc80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806bd80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806be80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806bf80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c080 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c180 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c280 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c380 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c480 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c580 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c680 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c780 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c880 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806c980 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806ca80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806cb80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806cc80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806cd80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806ce80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806cf80 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d080 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d180 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d280 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d380 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d480 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d580 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d680 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d780 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d880 with size: 0.000244 MiB
00:04:33.468 element at address: 0x20002806d980 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806da80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806db80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806dc80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806dd80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806de80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806df80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e080 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e180 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e280 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e380 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e480 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e580 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e680 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e780 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e880 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806e980 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806ea80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806eb80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806ec80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806ed80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806ee80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806ef80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f080 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f180 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f280 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f380 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f480 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f580 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f680 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f780 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f880 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806f980 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806fa80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:04:33.469 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:04:33.469 list of memzone associated elements. size: 599.920898 MiB
00:04:33.469 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:04:33.469 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:33.469 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:04:33.469 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:33.469 element at address: 0x200012df4740 with size: 92.045105 MiB
00:04:33.469 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58511_0
00:04:33.469 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:33.469 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58511_0
00:04:33.469 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:33.469 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58511_0
00:04:33.469 element at address: 0x2000197be900 with size: 20.255615 MiB
00:04:33.469 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:33.469 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:04:33.469 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:33.469 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:33.469 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58511_0
00:04:33.469 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:04:33.469 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58511
00:04:33.469 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:33.469 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58511
00:04:33.469 element at address: 0x200018efde00 with size: 1.008179 MiB
00:04:33.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:33.469 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:04:33.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:33.469 element at address: 0x200018afde00 with size: 1.008179 MiB
00:04:33.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:33.469 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:04:33.469 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:33.469 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:33.469 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58511
00:04:33.469 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:33.469 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58511
00:04:33.469 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:04:33.469 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58511
00:04:33.469 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:04:33.469 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58511
00:04:33.469 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:33.469 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58511
00:04:33.469 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:33.469 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58511
00:04:33.469 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:04:33.469 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:33.469 element at address: 0x200012c72280 with size: 0.500549 MiB
00:04:33.469 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:33.469 element at address: 0x20001967c440 with size: 0.250549 MiB
00:04:33.469 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:33.469 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:33.469 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58511
00:04:33.469 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:33.469 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58511
00:04:33.469 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:04:33.469 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:33.469 element at address: 0x200028064140 with size: 0.023804 MiB
00:04:33.469 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:33.469 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:33.469 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58511
00:04:33.469 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:04:33.469 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:33.469 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:33.469 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58511
00:04:33.469 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:33.469 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58511
00:04:33.469 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:33.469 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58511
00:04:33.469 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:04:33.469 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:33.469 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:33.469 17:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58511
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58511 ']'
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58511
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58511
killing process with pid 58511
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58511'
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58511
00:04:33.469 17:34:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58511
00:04:34.846 
00:04:34.846 real 0m2.569s
00:04:34.846 user 0m2.530s
00:04:34.846 sys 0m0.393s
00:04:34.846 17:34:24 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:34.846 ************************************
00:04:34.846 END TEST dpdk_mem_utility
00:04:34.846 ************************************
00:04:34.846 17:34:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:34.846 17:34:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:34.846 17:34:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:34.846 17:34:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:34.846 17:34:24 -- common/autotest_common.sh@10 -- # set +x
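The teardown traced above is the suite's killprocess pattern from test/common/autotest_common.sh. A reduced sketch of its shape (the real helper does more validation, including the reactor-name and sudo checks visible in the trace; this is only the outline):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap it; ignore its exit status
    }

    # Installed before the test body so the target dies even on failure:
    trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT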
00:04:34.846 ************************************
00:04:34.846 START TEST event
00:04:34.846 ************************************
00:04:34.846 17:34:24 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:34.846 * Looking for test storage...
00:04:34.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:34.846 17:34:24 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:34.846 17:34:24 event -- common/autotest_common.sh@1691 -- # lcov --version
00:04:34.846 17:34:24 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:35.104 17:34:24 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:35.104 17:34:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:35.104 17:34:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:35.104 17:34:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:35.104 17:34:24 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:35.104 17:34:24 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:35.104 17:34:24 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:35.104 17:34:24 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:35.104 17:34:24 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:35.104 17:34:24 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:35.104 17:34:24 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:35.104 17:34:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:35.104 17:34:24 event -- scripts/common.sh@344 -- # case "$op" in
00:04:35.104 17:34:24 event -- scripts/common.sh@345 -- # : 1
00:04:35.104 17:34:24 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:35.104 17:34:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:35.105 17:34:24 event -- scripts/common.sh@365 -- # decimal 1
00:04:35.105 17:34:24 event -- scripts/common.sh@353 -- # local d=1
00:04:35.105 17:34:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:35.105 17:34:24 event -- scripts/common.sh@355 -- # echo 1
00:04:35.105 17:34:24 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:35.105 17:34:24 event -- scripts/common.sh@366 -- # decimal 2
00:04:35.105 17:34:24 event -- scripts/common.sh@353 -- # local d=2
00:04:35.105 17:34:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:35.105 17:34:24 event -- scripts/common.sh@355 -- # echo 2
00:04:35.105 17:34:24 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:35.105 17:34:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:35.105 17:34:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:35.105 17:34:24 event -- scripts/common.sh@368 -- # return 0
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:35.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.105 --rc genhtml_branch_coverage=1
00:04:35.105 --rc genhtml_function_coverage=1
00:04:35.105 --rc genhtml_legend=1
00:04:35.105 --rc geninfo_all_blocks=1
00:04:35.105 --rc geninfo_unexecuted_blocks=1
00:04:35.105 
00:04:35.105 '
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:35.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.105 --rc genhtml_branch_coverage=1
00:04:35.105 --rc genhtml_function_coverage=1
00:04:35.105 --rc genhtml_legend=1
00:04:35.105 --rc geninfo_all_blocks=1
00:04:35.105 --rc geninfo_unexecuted_blocks=1
00:04:35.105 
00:04:35.105 '
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:35.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.105 --rc genhtml_branch_coverage=1
00:04:35.105 --rc genhtml_function_coverage=1
00:04:35.105 --rc genhtml_legend=1
00:04:35.105 --rc geninfo_all_blocks=1
00:04:35.105 --rc geninfo_unexecuted_blocks=1
00:04:35.105 
00:04:35.105 '
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:35.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.105 --rc genhtml_branch_coverage=1
00:04:35.105 --rc genhtml_function_coverage=1
00:04:35.105 --rc genhtml_legend=1
00:04:35.105 --rc geninfo_all_blocks=1
00:04:35.105 --rc geninfo_unexecuted_blocks=1
00:04:35.105 
00:04:35.105 '
00:04:35.105 17:34:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:35.105 17:34:24 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:35.105 17:34:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:35.105 17:34:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.105 17:34:24 event -- common/autotest_common.sh@10 -- # set +x
00:04:35.105 ************************************
00:04:35.105 START TEST event_perf
00:04:35.105 ************************************
00:04:35.105 17:34:24 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:35.105 Running I/O for 1 seconds...[2024-10-13 17:34:24.764272] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:04:35.105 [2024-10-13 17:34:24.764468] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58608 ]
00:04:35.105 [2024-10-13 17:34:24.914000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:35.364 [2024-10-13 17:34:25.014298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:35.364 [2024-10-13 17:34:25.014609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:35.364 [2024-10-13 17:34:25.015517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:35.364 Running I/O for 1 seconds...[2024-10-13 17:34:25.015536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:36.739 
00:04:36.739 lcore 0: 140222
00:04:36.739 lcore 1: 140218
00:04:36.739 lcore 2: 140220
00:04:36.739 lcore 3: 140219
00:04:36.739 done.
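The per-lcore counters above come from the event_perf binary, invoked with the flags traced earlier (-m is the reactor core mask, -t the run time in seconds). Running it by hand against this tree looks like:

    # One second of event dispatch on four reactors (cores 0-3):
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

Summing the four counters gives the whole-run throughput; here roughly 560k events dispatched per second across the 4 cores.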
00:04:36.739 ************************************
00:04:36.739 END TEST event_perf
00:04:36.739 ************************************
00:04:36.739 
00:04:36.739 real 0m1.449s
00:04:36.739 user 0m4.244s
00:04:36.739 sys 0m0.081s
00:04:36.739 17:34:26 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.739 17:34:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:36.739 17:34:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:36.739 17:34:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:36.739 17:34:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.739 17:34:26 event -- common/autotest_common.sh@10 -- # set +x
00:04:36.739 ************************************
00:04:36.739 START TEST event_reactor
00:04:36.739 ************************************
00:04:36.739 17:34:26 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:36.739 [2024-10-13 17:34:26.259565] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:04:36.739 [2024-10-13 17:34:26.260276] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ]
00:04:36.739 [2024-10-13 17:34:26.418458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:36.739 [2024-10-13 17:34:26.514633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.118 test_start
00:04:38.118 oneshot
00:04:38.118 tick 100
00:04:38.118 tick 100
00:04:38.118 tick 250
00:04:38.118 tick 100
00:04:38.118 tick 100
00:04:38.118 tick 100
00:04:38.118 tick 250
00:04:38.118 tick 500
00:04:38.118 tick 100
00:04:38.118 tick 100
00:04:38.118 tick 250
00:04:38.118 tick 100
00:04:38.118 tick 100
00:04:38.118 test_end
00:04:38.118 
00:04:38.118 real 0m1.453s
00:04:38.118 user 0m1.280s
00:04:38.118 sys 0m0.063s
00:04:38.118 17:34:27 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:38.118 17:34:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:38.118 ************************************
00:04:38.118 END TEST event_reactor
00:04:38.118 ************************************
00:04:38.118 17:34:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:38.118 17:34:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:38.118 17:34:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:38.118 17:34:27 event -- common/autotest_common.sh@10 -- # set +x
00:04:38.118 ************************************
00:04:38.118 START TEST event_reactor_perf
00:04:38.118 ************************************
00:04:38.118 17:34:27 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:38.118 [2024-10-13 17:34:27.764532] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:04:38.118 [2024-10-13 17:34:27.764695] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ]
00:04:38.118 [2024-10-13 17:34:27.909407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:38.378 [2024-10-13 17:34:28.029895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:39.760 test_start
00:04:39.760 test_end
00:04:39.760 Performance: 312367 events per second
00:04:39.760 
00:04:39.760 real 0m1.454s
00:04:39.760 user 0m1.278s
00:04:39.760 sys 0m0.069s
00:04:39.760 17:34:29 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:39.760 17:34:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:39.760 ************************************
00:04:39.760 END TEST event_reactor_perf
00:04:39.760 ************************************
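Every test in this section is wrapped the same way: a START banner, the timed command, an END banner, and the real/user/sys block. That framing comes from the suite's run_test helper in autotest_common.sh. A reduced sketch of the pattern (the real helper also records names for the final report; this is only the shape):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # the bash 'time' keyword produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test event_reactor_perf ./reactor_perf -t 1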
00:04:39.760 17:34:29 event -- event/event.sh@49 -- # uname -s
00:04:39.760 17:34:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:39.760 17:34:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:39.760 17:34:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:39.760 17:34:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:39.760 17:34:29 event -- common/autotest_common.sh@10 -- # set +x
00:04:39.760 ************************************
00:04:39.760 START TEST event_scheduler
00:04:39.760 ************************************
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:39.760 * Looking for test storage...
00:04:39.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:39.760 17:34:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.760 --rc genhtml_branch_coverage=1
00:04:39.760 --rc genhtml_function_coverage=1
00:04:39.760 --rc genhtml_legend=1
00:04:39.760 --rc geninfo_all_blocks=1
00:04:39.760 --rc geninfo_unexecuted_blocks=1
00:04:39.760 
00:04:39.760 '
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.760 --rc genhtml_branch_coverage=1
00:04:39.760 --rc genhtml_function_coverage=1
00:04:39.760 --rc genhtml_legend=1
00:04:39.760 --rc geninfo_all_blocks=1
00:04:39.760 --rc geninfo_unexecuted_blocks=1
00:04:39.760 
00:04:39.760 '
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.760 --rc genhtml_branch_coverage=1
00:04:39.760 --rc genhtml_function_coverage=1
00:04:39.760 --rc genhtml_legend=1
00:04:39.760 --rc geninfo_all_blocks=1
00:04:39.760 --rc geninfo_unexecuted_blocks=1
00:04:39.760 
00:04:39.760 '
00:04:39.760 17:34:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.760 --rc genhtml_branch_coverage=1
00:04:39.760 --rc genhtml_function_coverage=1
00:04:39.760 --rc genhtml_legend=1
00:04:39.760 --rc geninfo_all_blocks=1
00:04:39.760 --rc geninfo_unexecuted_blocks=1
00:04:39.760 
00:04:39.760 '
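The xtrace that keeps recurring before each test is scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2), to pick compatible --rc options. A reduced sketch of that comparison, reconstructed from the trace rather than copied from upstream: each version string is split on '.', '-' and ':' and compared component by component.

    cmp_versions() {
        local op=$2 v
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # First differing component decides the comparison.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "installed lcov predates 2.x: keep the --rc options above"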
00:04:39.761 17:34:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:39.761 17:34:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58749
00:04:39.761 17:34:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:39.761 17:34:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58749
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58749 ']'
00:04:39.761 17:34:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:39.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:39.761 17:34:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:39.761 [2024-10-13 17:34:29.457221] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:04:39.761 [2024-10-13 17:34:29.457348] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ]
00:04:40.021 [2024-10-13 17:34:29.601376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:40.021 [2024-10-13 17:34:29.707283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.021 [2024-10-13 17:34:29.707677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:40.021 [2024-10-13 17:34:29.708050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:40.021 [2024-10-13 17:34:29.708065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:04:40.592 17:34:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:40.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:40.592 POWER: Cannot set governor of lcore 0 to userspace
00:04:40.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:40.592 POWER: Cannot set governor of lcore 0 to performance
00:04:40.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:40.592 POWER: Cannot set governor of lcore 0 to userspace
00:04:40.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:40.592 POWER: Cannot set governor of lcore 0 to userspace
00:04:40.592 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:04:40.592 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:40.592 POWER: Unable to set Power Management Environment for lcore 0
00:04:40.592 [2024-10-13 17:34:30.303516] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:04:40.592 [2024-10-13 17:34:30.303596] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:04:40.592 [2024-10-13 17:34:30.303656] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
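What the test is doing over RPC at this point, expressed with the stock rpc.py client (rpc_cmd is a thin wrapper around it; because the scheduler app was started with --wait-for-rpc, init must be finished explicitly):

    SPDK_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_RPC" framework_set_scheduler dynamic   # select the dynamic scheduler
    "$SPDK_RPC" framework_start_init              # complete subsystem initialization

The POWER:/GUEST_CHANNEL: errors above are the DPDK power governor failing to find cpufreq sysfs knobs or a virtio power-agent channel inside the VM; the dynamic scheduler then proceeds without a governor, which is expected in this environment rather than a test failure.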
00:04:40.592 [2024-10-13 17:34:30.303715] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:40.592 [2024-10-13 17:34:30.303739] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:40.592 [2024-10-13 17:34:30.303787] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.592 17:34:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.592 17:34:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 [2024-10-13 17:34:30.523973] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:40.852 17:34:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:40.852 17:34:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:40.852 17:34:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 ************************************
00:04:40.852 START TEST scheduler_create_thread
00:04:40.852 ************************************
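The rpc_cmd --plugin calls that follow create and manipulate threads through an rpc.py plugin that ships with this test (it lives alongside scheduler.sh in test/event/scheduler; the bare numbers like 2, 3, 4 in the trace are the thread ids it prints back). An equivalent direct invocation, assuming that plugin location, would be:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    PYTHONPATH=$SPDK_DIR/test/event/scheduler \
        "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100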
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 2
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 3
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 4
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 5
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 6
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 7
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 8
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 9
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.852 10
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.852 17:34:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.421 17:34:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.421 ************************************ 00:04:41.421 END TEST scheduler_create_thread 00:04:41.421 ************************************ 00:04:41.421 00:04:41.421 real 0m0.591s 00:04:41.421 user 0m0.015s 00:04:41.421 sys 0m0.001s 00:04:41.421 17:34:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.421 17:34:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.421 17:34:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:41.421 17:34:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58749 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58749 ']' 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58749 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58749 00:04:41.421 killing process with pid 58749 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58749' 00:04:41.421 17:34:31 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58749 00:04:41.421 
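Note: the xtrace above exercises SPDK's pluggable scheduler RPCs end to end. As a hedged recap, the sequence boils down to the following (the rpc.py path, plugin name, and RPC names are taken verbatim from the trace; the shell variables are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Pinned threads: one fully-active (-a 100) thread per core mask 0x1..0x8,
# plus matching fully-idle (-a 0) threads on the same cores.
"$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
"$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# Unpinned threads: create, then retune or delete by the returned thread id.
thread_id=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
"$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

tmp_id=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
"$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tmp_id"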
17:34:31 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58749 00:04:41.987 [2024-10-13 17:34:31.605091] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:42.556 ************************************ 00:04:42.556 END TEST event_scheduler 00:04:42.556 ************************************ 00:04:42.556 00:04:42.556 real 0m3.035s 00:04:42.556 user 0m5.887s 00:04:42.556 sys 0m0.356s 00:04:42.556 17:34:32 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.556 17:34:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.556 17:34:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:42.556 17:34:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:42.556 17:34:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.556 17:34:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.556 17:34:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.556 ************************************ 00:04:42.556 START TEST app_repeat 00:04:42.556 ************************************ 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:42.556 Process app_repeat pid: 58833 00:04:42.556 spdk_app_start Round 0 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58833 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58833' 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58833 /var/tmp/spdk-nbd.sock 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58833 ']' 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
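Note: at this point the harness has just launched the repeat-test binary and is waiting for its RPC socket. A minimal sketch of that start-and-wait pattern, assuming the binary path and arguments shown in the trace (the polling loop is a simplification of the real waitforlisten helper, not its verbatim body):

sock=/var/tmp/spdk-nbd.sock
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!

# Poll until the app answers RPCs on the UNIX socket (simplified waitforlisten).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done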
00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.556 17:34:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.556 17:34:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:42.816 [2024-10-13 17:34:32.373407] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:42.816 [2024-10-13 17:34:32.373668] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58833 ] 00:04:42.816 [2024-10-13 17:34:32.520131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.076 [2024-10-13 17:34:32.641175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.076 [2024-10-13 17:34:32.641315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.647 17:34:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.647 17:34:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:43.647 17:34:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.907 Malloc0 00:04:43.907 17:34:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.168 Malloc1 00:04:44.168 17:34:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.168 17:34:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.168 /dev/nbd0 00:04:44.429 17:34:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.429 17:34:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.429 17:34:33 event.app_repeat -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.429 17:34:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.429 1+0 records in 00:04:44.429 1+0 records out 00:04:44.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258775 s, 15.8 MB/s 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.429 17:34:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.429 17:34:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.429 17:34:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.429 /dev/nbd1 00:04:44.429 17:34:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.429 17:34:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.429 1+0 records in 00:04:44.429 1+0 records out 00:04:44.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279013 s, 14.7 MB/s 00:04:44.429 17:34:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.690 17:34:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.690 17:34:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.690 17:34:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
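Note: the waitfornbd helper traced above (common/autotest_common.sh@868-889) reduces to roughly this sketch; the temp-file path is simplified here:

waitfornbd() {
    local nbd_name=$1 i size
    # Wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # Prove the device services reads: one 4 KiB O_DIRECT block must come back.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}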
00:04:44.690 17:34:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.690 { 00:04:44.690 "nbd_device": "/dev/nbd0", 00:04:44.690 "bdev_name": "Malloc0" 00:04:44.690 }, 00:04:44.690 { 00:04:44.690 "nbd_device": "/dev/nbd1", 00:04:44.690 "bdev_name": "Malloc1" 00:04:44.690 } 00:04:44.690 ]' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.690 { 00:04:44.690 "nbd_device": "/dev/nbd0", 00:04:44.690 "bdev_name": "Malloc0" 00:04:44.690 }, 00:04:44.690 { 00:04:44.690 "nbd_device": "/dev/nbd1", 00:04:44.690 "bdev_name": "Malloc1" 00:04:44.690 } 00:04:44.690 ]' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.690 /dev/nbd1' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.690 /dev/nbd1' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.690 17:34:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.951 256+0 records in 00:04:44.951 256+0 records out 00:04:44.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743425 s, 141 MB/s 00:04:44.951 17:34:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.951 17:34:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.951 256+0 records in 00:04:44.951 256+0 records out 00:04:44.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175282 s, 59.8 MB/s 00:04:44.951 17:34:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:04:44.952 256+0 records in 00:04:44.952 256+0 records out 00:04:44.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219255 s, 47.8 MB/s 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.952 17:34:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.212 17:34:34 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.212 17:34:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.472 17:34:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.472 17:34:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.042 17:34:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.612 [2024-10-13 17:34:36.259675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.612 [2024-10-13 17:34:36.350588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.612 [2024-10-13 17:34:36.350592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.872 [2024-10-13 17:34:36.463859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.872 [2024-10-13 17:34:36.463941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.780 17:34:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.780 spdk_app_start Round 1 00:04:48.780 17:34:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.780 17:34:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58833 /var/tmp/spdk-nbd.sock 00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58833 ']' 00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
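Note: for reference, the write/verify passes traced in the preceding round (nbd_dd_data_verify) come down to this sketch, with the random-pattern file path shortened:

nbd_list=(/dev/nbd0 /dev/nbd1)
pattern=/tmp/nbdrandtest

# write: 1 MiB of random data, replicated to each NBD device with O_DIRECT
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1 MiB of every device against the pattern
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev"
done
rm "$pattern"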
00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.780 17:34:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.040 17:34:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.040 17:34:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:49.040 17:34:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.301 Malloc0 00:04:49.301 17:34:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.561 Malloc1 00:04:49.561 17:34:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.561 17:34:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.822 /dev/nbd0 00:04:49.822 17:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.822 17:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.822 1+0 records in 00:04:49.822 1+0 records out 
00:04:49.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581237 s, 7.0 MB/s 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:49.822 17:34:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:49.822 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.822 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.822 17:34:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.083 /dev/nbd1 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.083 1+0 records in 00:04:50.083 1+0 records out 00:04:50.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466808 s, 8.8 MB/s 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:50.083 17:34:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.083 17:34:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.343 { 00:04:50.343 "nbd_device": "/dev/nbd0", 00:04:50.343 "bdev_name": "Malloc0" 00:04:50.343 }, 00:04:50.343 { 00:04:50.343 "nbd_device": "/dev/nbd1", 00:04:50.343 "bdev_name": "Malloc1" 00:04:50.343 } 
00:04:50.343 ]' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.343 { 00:04:50.343 "nbd_device": "/dev/nbd0", 00:04:50.343 "bdev_name": "Malloc0" 00:04:50.343 }, 00:04:50.343 { 00:04:50.343 "nbd_device": "/dev/nbd1", 00:04:50.343 "bdev_name": "Malloc1" 00:04:50.343 } 00:04:50.343 ]' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.343 /dev/nbd1' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.343 /dev/nbd1' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.343 256+0 records in 00:04:50.343 256+0 records out 00:04:50.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642208 s, 163 MB/s 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.343 256+0 records in 00:04:50.343 256+0 records out 00:04:50.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151871 s, 69.0 MB/s 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.343 17:34:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.343 256+0 records in 00:04:50.343 256+0 records out 00:04:50.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177844 s, 59.0 MB/s 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.343 17:34:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.343 17:34:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.601 17:34:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.860 17:34:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.118 17:34:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.118 17:34:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.376 17:34:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.942 [2024-10-13 17:34:41.579128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.942 [2024-10-13 17:34:41.662336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.942 [2024-10-13 17:34:41.662428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.200 [2024-10-13 17:34:41.776272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.200 [2024-10-13 17:34:41.776347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.729 spdk_app_start Round 2 00:04:54.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.729 17:34:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.729 17:34:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:54.729 17:34:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58833 /var/tmp/spdk-nbd.sock 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58833 ']' 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
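Note: the disk-count check that keeps recurring in the trace (nbd_common.sh@61-66) is essentially the following; wrapping it as a function is an editorial choice, not the script's verbatim shape:

nbd_get_count() {
    local sock=$1 json names
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    # grep -c prints 0 but exits non-zero when nothing matches (as after teardown),
    # hence the trailing true seen in the trace
    echo "$names" | grep -c /dev/nbd || true
}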
00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.729 17:34:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.729 17:34:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.729 Malloc0 00:04:54.729 17:34:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.987 Malloc1 00:04:54.987 17:34:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.987 17:34:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.988 17:34:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.988 17:34:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.988 17:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.988 17:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.988 17:34:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.246 /dev/nbd0 00:04:55.246 17:34:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.246 17:34:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.246 1+0 records in 00:04:55.246 1+0 records out 
00:04:55.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182337 s, 22.5 MB/s 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.246 17:34:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.246 17:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.246 17:34:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.246 17:34:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.504 /dev/nbd1 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.504 1+0 records in 00:04:55.504 1+0 records out 00:04:55.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277587 s, 14.8 MB/s 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.504 17:34:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.504 17:34:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.762 { 00:04:55.762 "nbd_device": "/dev/nbd0", 00:04:55.762 "bdev_name": "Malloc0" 00:04:55.762 }, 00:04:55.762 { 00:04:55.762 "nbd_device": "/dev/nbd1", 00:04:55.762 "bdev_name": "Malloc1" 00:04:55.762 } 
00:04:55.762 ]' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.762 { 00:04:55.762 "nbd_device": "/dev/nbd0", 00:04:55.762 "bdev_name": "Malloc0" 00:04:55.762 }, 00:04:55.762 { 00:04:55.762 "nbd_device": "/dev/nbd1", 00:04:55.762 "bdev_name": "Malloc1" 00:04:55.762 } 00:04:55.762 ]' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.762 /dev/nbd1' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.762 /dev/nbd1' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.762 256+0 records in 00:04:55.762 256+0 records out 00:04:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00746388 s, 140 MB/s 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.762 256+0 records in 00:04:55.762 256+0 records out 00:04:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156822 s, 66.9 MB/s 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.762 256+0 records in 00:04:55.762 256+0 records out 00:04:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186165 s, 56.3 MB/s 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.762 17:34:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.762 17:34:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.021 17:34:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.279 17:34:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.537 17:34:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.537 17:34:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.795 17:34:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.360 [2024-10-13 17:34:47.031743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.360 [2024-10-13 17:34:47.123966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.360 [2024-10-13 17:34:47.124069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.618 [2024-10-13 17:34:47.231711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.618 [2024-10-13 17:34:47.231772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.147 17:34:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58833 /var/tmp/spdk-nbd.sock 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58833 ']' 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
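Note: zooming out, the three "spdk_app_start Round N" iterations follow the shape below; this is a hedged reconstruction from the event.sh traces (for-loop at @23, waitforlisten at @25, kill at @34, sleep at @35), not the verbatim script:

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # per round: create two Malloc bdevs, attach them as /dev/nbd0-1,
    # run the write/verify pass, then detach and restart via SIGTERM
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM
    sleep 3
done
killprocess "$repeat_pid"   # final teardown once all rounds complete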
00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:00.147 17:34:49 event.app_repeat -- event/event.sh@39 -- # killprocess 58833 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58833 ']' 00:05:00.147 17:34:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58833 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58833 00:05:00.148 killing process with pid 58833 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58833' 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58833 00:05:00.148 17:34:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58833 00:05:00.406 spdk_app_start is called in Round 0. 00:05:00.406 Shutdown signal received, stop current app iteration 00:05:00.406 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:00.406 spdk_app_start is called in Round 1. 00:05:00.406 Shutdown signal received, stop current app iteration 00:05:00.406 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:00.406 spdk_app_start is called in Round 2. 00:05:00.406 Shutdown signal received, stop current app iteration 00:05:00.406 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:00.406 spdk_app_start is called in Round 3. 00:05:00.406 Shutdown signal received, stop current app iteration 00:05:00.664 ************************************ 00:05:00.664 END TEST app_repeat 00:05:00.664 ************************************ 00:05:00.664 17:34:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:00.664 17:34:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:00.664 00:05:00.664 real 0m17.911s 00:05:00.664 user 0m39.053s 00:05:00.664 sys 0m2.242s 00:05:00.664 17:34:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.664 17:34:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.664 17:34:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:00.664 17:34:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:00.664 17:34:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.664 17:34:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.664 17:34:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.664 ************************************ 00:05:00.664 START TEST cpu_locks 00:05:00.664 ************************************ 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:00.664 * Looking for test storage... 
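The app_repeat rounds above come from one long-lived app that the harness repeatedly restarts over RPC. Roughly, per round (a sketch inferred from the Round 0-3 notices; run_round_io is a hypothetical stand-in for the nbd cmp/rm work):

    # Sketch of the app_repeat round driver (inferred, not test output).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for round in 0 1 2 3; do
        waitforlisten "$app_pid" "$sock"              # harness helper seen in the trace
        run_round_io                                  # hypothetical: the nbd work above
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM  # event.sh@34: trigger next round
        sleep 3                                       # event.sh@35
    done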
00:05:00.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.664 17:34:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.664 --rc genhtml_branch_coverage=1 00:05:00.664 --rc genhtml_function_coverage=1 00:05:00.664 --rc genhtml_legend=1 00:05:00.664 --rc geninfo_all_blocks=1 00:05:00.664 --rc geninfo_unexecuted_blocks=1 00:05:00.664 00:05:00.664 ' 00:05:00.664 17:34:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.664 --rc genhtml_branch_coverage=1 00:05:00.664 --rc genhtml_function_coverage=1 
00:05:00.664 --rc genhtml_legend=1 00:05:00.664 --rc geninfo_all_blocks=1 00:05:00.664 --rc geninfo_unexecuted_blocks=1 00:05:00.664 00:05:00.665 ' 00:05:00.665 17:34:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 00:05:00.665 ' 00:05:00.665 17:34:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 00:05:00.665 ' 00:05:00.665 17:34:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:00.665 17:34:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:00.665 17:34:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:00.665 17:34:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:00.665 17:34:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.665 17:34:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.665 17:34:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 START TEST default_locks 00:05:00.665 ************************************ 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59269 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59269 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59269 ']' 00:05:00.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.665 17:34:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.923 [2024-10-13 17:34:50.522787] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
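The long scripts/common.sh trace above is just a version gate: "lt 1.15 2" decides whether the installed lcov predates 2.x. Condensed into one function (the real cmp_versions helper supports more operators; this keeps only the less-than path as traced):

    # Sketch of the version comparison traced above (condensed).
    lt() {    # usage: lt 1.15 2  ->  exit 0 if $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"    # split on . - : as in scripts/common.sh@336
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # pad short side with 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2.x"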
00:05:00.923 [2024-10-13 17:34:50.522920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:05:00.923 [2024-10-13 17:34:50.674269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.182 [2024-10-13 17:34:50.782986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.748 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.748 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:01.748 17:34:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59269 00:05:01.748 17:34:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59269 00:05:01.748 17:34:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59269 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59269 ']' 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59269 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59269 00:05:02.006 killing process with pid 59269 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59269' 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59269 00:05:02.006 17:34:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59269 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59269 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59269 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:03.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
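Two helpers recur throughout the cpu_locks trace: locks_exist asks lslocks whether a pid holds an spdk_cpu_lock file lock, and killprocess terminates a target and reaps it. A trimmed sketch of both, reconstructed from the autotest_common.sh calls above:

    # Sketch of locks_exist / killprocess as exercised above (trimmed).
    locks_exist() {    # does target $1 hold an spdk_cpu_lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"                               # SIGTERM by default
        wait "$pid" 2>/dev/null || true           # reap; guard assumed
    }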
00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59269 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59269 ']' 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.380 ERROR: process (pid: 59269) is no longer running 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59269) - No such process 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.380 00:05:03.380 real 0m2.535s 00:05:03.380 user 0m2.494s 00:05:03.380 sys 0m0.460s 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.380 17:34:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.380 ************************************ 00:05:03.380 END TEST default_locks 00:05:03.380 ************************************ 00:05:03.380 17:34:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:03.380 17:34:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.380 17:34:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.380 17:34:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.380 ************************************ 00:05:03.380 START TEST default_locks_via_rpc 00:05:03.380 ************************************ 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59322 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59322 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59322 ']' 
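The NOT wrapper at cpu_locks.sh@52 above runs a command that is expected to fail and succeeds only when it does, which is how the test proves the killed target is really gone. A minimal sketch (the real helper also inspects signal exit codes above 128, omitted here):

    # Sketch of the NOT expected-failure wrapper (simplified).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # NOT succeeds exactly when the command failed
    }
    # As above: pid 59269 was already killed, so waitforlisten fails and NOT passes.
    NOT waitforlisten 59269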
00:05:03.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.380 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.380 [2024-10-13 17:34:53.098342] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:03.380 [2024-10-13 17:34:53.098458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ] 00:05:03.638 [2024-10-13 17:34:53.246958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.638 [2024-10-13 17:34:53.326871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59322 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59322 00:05:04.204 17:34:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59322 00:05:04.463 17:34:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59322 ']' 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59322 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59322 00:05:04.463 killing process with pid 59322 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59322' 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59322 00:05:04.463 17:34:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59322 00:05:05.838 00:05:05.838 real 0m2.328s 00:05:05.838 user 0m2.360s 00:05:05.838 sys 0m0.424s 00:05:05.838 17:34:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.838 ************************************ 00:05:05.838 END TEST default_locks_via_rpc 00:05:05.838 ************************************ 00:05:05.838 17:34:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.838 17:34:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.838 17:34:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.838 17:34:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.838 17:34:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.838 ************************************ 00:05:05.838 START TEST non_locking_app_on_locked_coremask 00:05:05.838 ************************************ 00:05:05.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59385 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59385 /var/tmp/spdk.sock 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59385 ']' 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
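default_locks_via_rpc, traced above, shows that a target can drop and retake its core lock at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A sketch of the round trip; rpc.py stands in for the test's rpc_cmd wrapper and lslocks for its lock-file checks:

    # Sketch of the runtime lock round trip (stand-ins noted above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=59322                                   # the target started above
    "$rpc" framework_disable_cpumask_locks      # release the core-0 lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"
    "$rpc" framework_enable_cpumask_locks       # reclaim it
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locked again, as expected"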
00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.838 17:34:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.838 [2024-10-13 17:34:55.469755] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:05.838 [2024-10-13 17:34:55.470371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:05:05.838 [2024-10-13 17:34:55.619788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.096 [2024-10-13 17:34:55.696207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59390 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59390 /var/tmp/spdk2.sock 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59390 ']' 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.756 17:34:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 [2024-10-13 17:34:56.373734] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:06.756 [2024-10-13 17:34:56.373848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:05:06.756 [2024-10-13 17:34:56.522571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
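The setup above is the whole point of non_locking_app_on_locked_coremask: two targets can share core 0 as long as the second opts out of the cpumask lock and talks on its own RPC socket. In outline:

    # Sketch of the two-target setup traced above.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                 # pid 59385: claims core 0
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 59390: no claim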
00:05:06.756 [2024-10-13 17:34:56.522608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.014 [2024-10-13 17:34:56.683414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.948 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.948 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:07.948 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59385 00:05:07.948 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59385 00:05:07.948 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59385 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59385 ']' 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59385 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59385 00:05:08.207 killing process with pid 59385 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59385' 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59385 00:05:08.207 17:34:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59385 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59390 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59390 ']' 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59390 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59390 00:05:10.735 killing process with pid 59390 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59390' 00:05:10.735 17:35:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59390 00:05:10.735 17:35:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59390 00:05:12.118 00:05:12.118 real 0m6.126s 00:05:12.118 user 0m6.431s 00:05:12.118 sys 0m0.786s 00:05:12.118 17:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.118 17:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.118 ************************************ 00:05:12.118 END TEST non_locking_app_on_locked_coremask 00:05:12.118 ************************************ 00:05:12.118 17:35:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:12.118 17:35:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.118 17:35:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.118 17:35:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.118 ************************************ 00:05:12.118 START TEST locking_app_on_unlocked_coremask 00:05:12.118 ************************************ 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59492 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59492 /var/tmp/spdk.sock 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59492 ']' 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.118 17:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.118 [2024-10-13 17:35:01.625425] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:12.118 [2024-10-13 17:35:01.625541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:05:12.118 [2024-10-13 17:35:01.773916] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.118 [2024-10-13 17:35:01.773955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.118 [2024-10-13 17:35:01.853023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59502 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59502 /var/tmp/spdk2.sock 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59502 ']' 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.684 17:35:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.943 [2024-10-13 17:35:02.528450] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:12.943 [2024-10-13 17:35:02.528730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:05:12.943 [2024-10-13 17:35:02.676778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.201 [2024-10-13 17:35:02.843902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.135 17:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.135 17:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:14.135 17:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59502 00:05:14.135 17:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59502 00:05:14.135 17:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59492 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59492 ']' 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59492 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59492 00:05:14.394 killing process with pid 59492 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59492' 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59492 00:05:14.394 17:35:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59492 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59502 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59502 ']' 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59502 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59502 00:05:16.924 killing process with pid 59502 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.924 17:35:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59502' 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59502 00:05:16.924 17:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59502 00:05:18.316 00:05:18.316 real 0m6.386s 00:05:18.316 user 0m6.718s 00:05:18.316 sys 0m0.816s 00:05:18.316 ************************************ 00:05:18.316 END TEST locking_app_on_unlocked_coremask 00:05:18.316 ************************************ 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 17:35:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.317 17:35:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.317 17:35:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.317 17:35:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 ************************************ 00:05:18.317 START TEST locking_app_on_locked_coremask 00:05:18.317 ************************************ 00:05:18.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59599 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59599 /var/tmp/spdk.sock 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59599 ']' 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 17:35:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.317 [2024-10-13 17:35:08.059711] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:18.317 [2024-10-13 17:35:08.060395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59599 ] 00:05:18.577 [2024-10-13 17:35:08.210081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.577 [2024-10-13 17:35:08.308922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59615 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59615 /var/tmp/spdk2.sock 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59615 /var/tmp/spdk2.sock 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59615 /var/tmp/spdk2.sock 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59615 ']' 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.144 17:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 [2024-10-13 17:35:08.976392] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:19.402 [2024-10-13 17:35:08.976530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:05:19.402 [2024-10-13 17:35:09.131006] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59599 has claimed it. 00:05:19.402 [2024-10-13 17:35:09.134585] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.969 ERROR: process (pid: 59615) is no longer running 00:05:19.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59615) - No such process 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59599 ']' 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59599 00:05:19.969 killing process with pid 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59599' 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59599 00:05:19.969 17:35:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59599 00:05:21.867 00:05:21.867 real 0m3.220s 00:05:21.867 user 0m3.432s 00:05:21.867 sys 0m0.531s 00:05:21.867 17:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.867 17:35:11 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:21.867 ************************************ 00:05:21.867 END TEST locking_app_on_locked_coremask 00:05:21.867 ************************************ 00:05:21.867 17:35:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:21.867 17:35:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.867 17:35:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.867 17:35:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 ************************************ 00:05:21.867 START TEST locking_overlapped_coremask 00:05:21.867 ************************************ 00:05:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59668 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59668 /var/tmp/spdk.sock 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59668 ']' 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 17:35:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:21.867 [2024-10-13 17:35:11.328429] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:21.867 [2024-10-13 17:35:11.328601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59668 ] 00:05:21.867 [2024-10-13 17:35:11.476431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.867 [2024-10-13 17:35:11.571466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.867 [2024-10-13 17:35:11.571768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.867 [2024-10-13 17:35:11.573101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59686 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59686 /var/tmp/spdk2.sock 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59686 /var/tmp/spdk2.sock 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59686 /var/tmp/spdk2.sock 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59686 ']' 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.431 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.432 [2024-10-13 17:35:12.218542] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
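Why the second target is expected to fail here: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two masks collide exactly on core 2. One line of bash arithmetic makes the overlap explicit:

    # 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2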
00:05:22.432 [2024-10-13 17:35:12.218855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59686 ] 00:05:22.690 [2024-10-13 17:35:12.374838] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59668 has claimed it. 00:05:22.690 [2024-10-13 17:35:12.374893] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.256 ERROR: process (pid: 59686) is no longer running 00:05:23.256 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59686) - No such process 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59668 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59668 ']' 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59668 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59668 00:05:23.256 killing process with pid 59668 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59668' 00:05:23.256 17:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59668 00:05:23.256 17:35:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59668 00:05:24.648 00:05:24.648 real 0m2.971s 00:05:24.648 user 0m7.981s 00:05:24.648 sys 0m0.473s 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.648 ************************************ 00:05:24.648 END TEST locking_overlapped_coremask 00:05:24.648 ************************************ 00:05:24.648 17:35:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.648 17:35:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.648 17:35:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.648 17:35:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.648 ************************************ 00:05:24.648 START TEST locking_overlapped_coremask_via_rpc 00:05:24.648 ************************************ 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59739 00:05:24.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59739 /var/tmp/spdk.sock 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59739 ']' 00:05:24.648 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.649 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.649 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.649 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.649 17:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.649 [2024-10-13 17:35:14.346967] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:24.649 [2024-10-13 17:35:14.347101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:05:24.908 [2024-10-13 17:35:14.497481] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
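With cpumask locks active, every claimed core is backed by a lock file, and check_remaining_locks above expanded /var/tmp/spdk_cpu_lock_{000..002} to match the first target's 0x7 mask; the second target aborted because its 0x1c mask also contains core 2. A rough illustration (not SPDK code) of why the two masks collide:

    # Cores are the set bits of the mask: 0x7 -> {0,1,2}, 0x1c -> {2,3,4}.
    for mask in 0x7 0x1c; do
        cores=()
        for ((core = 0; core < 8; core++)); do
            (( mask >> core & 1 )) && cores+=("$core")
        done
        printf '%s -> cores %s\n' "$mask" "${cores[*]}"
    done

The locking_overlapped_coremask_via_rpc test starting here sidesteps that collision at startup by passing --disable-cpumask-locks, hence the "CPU core locks deactivated" notice.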
00:05:24.908 [2024-10-13 17:35:14.497525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.908 [2024-10-13 17:35:14.604651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.908 [2024-10-13 17:35:14.604762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.908 [2024-10-13 17:35:14.605051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59757 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59757 /var/tmp/spdk2.sock 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59757 ']' 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.473 17:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.473 [2024-10-13 17:35:15.270543] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:25.473 [2024-10-13 17:35:15.270849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59757 ] 00:05:25.731 [2024-10-13 17:35:15.419740] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.731 [2024-10-13 17:35:15.419796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.989 [2024-10-13 17:35:15.594468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.989 [2024-10-13 17:35:15.597754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.989 [2024-10-13 17:35:15.597780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.923 [2024-10-13 17:35:16.548709] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59739 has claimed it. 00:05:26.923 request: 00:05:26.923 { 00:05:26.923 "method": "framework_enable_cpumask_locks", 00:05:26.923 "req_id": 1 00:05:26.923 } 00:05:26.923 Got JSON-RPC error response 00:05:26.923 response: 00:05:26.923 { 00:05:26.923 "code": -32603, 00:05:26.923 "message": "Failed to claim CPU core: 2" 00:05:26.923 } 00:05:26.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
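With both targets up and unlocked, the test re-enables locking over JSON-RPC: the first target's framework_enable_cpumask_locks succeeds and claims cores 0-2, and the same call against the second target's socket fails with the internal error shown above (-32603, "Failed to claim CPU core: 2") because core 2 belongs to both masks. Assuming rpc_cmd is the suite's wrapper around scripts/rpc.py, the two calls correspond to:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                         # first target: succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: -32603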
00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59739 /var/tmp/spdk.sock 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59739 ']' 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.923 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59757 /var/tmp/spdk2.sock 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59757 ']' 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
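waitforlisten, traced here and throughout the log, polls the new target's RPC socket until it answers, giving up after max_retries (100) attempts. A compressed sketch of that loop under those assumptions; the real helper in autotest_common.sh also toggles xtrace and handles the no-longer-running case seen earlier:

    # Sketch of the waitforlisten idiom; the -s (socket) and -t (timeout)
    # flags match scripts/rpc.py, the retry bookkeeping is simplified.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }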
00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.181 ************************************ 00:05:27.181 END TEST locking_overlapped_coremask_via_rpc 00:05:27.181 ************************************ 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.181 00:05:27.181 real 0m2.706s 00:05:27.181 user 0m1.062s 00:05:27.181 sys 0m0.142s 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.181 17:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.440 17:35:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.440 17:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59739 ]] 00:05:27.440 17:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59739 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59739 ']' 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59739 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59739 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59739' 00:05:27.440 killing process with pid 59739 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59739 00:05:27.440 17:35:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59739 00:05:28.815 17:35:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59757 ]] 00:05:28.815 17:35:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59757 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59757 ']' 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59757 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.815 
17:35:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59757 00:05:28.815 killing process with pid 59757 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59757' 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59757 00:05:28.815 17:35:18 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59757 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.748 Process with pid 59739 is not found 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59739 ]] 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59739 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59739 ']' 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59739 00:05:29.748 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59739) - No such process 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59739 is not found' 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59757 ]] 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59757 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59757 ']' 00:05:29.748 Process with pid 59757 is not found 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59757 00:05:29.748 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59757) - No such process 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59757 is not found' 00:05:29.748 17:35:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.748 ************************************ 00:05:29.748 END TEST cpu_locks 00:05:29.748 ************************************ 00:05:29.748 00:05:29.748 real 0m29.157s 00:05:29.748 user 0m49.600s 00:05:29.748 sys 0m4.465s 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.748 17:35:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.748 ************************************ 00:05:29.748 END TEST event 00:05:29.748 ************************************ 00:05:29.748 00:05:29.748 real 0m54.890s 00:05:29.748 user 1m41.514s 00:05:29.748 sys 0m7.522s 00:05:29.748 17:35:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.748 17:35:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.748 17:35:19 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:29.748 17:35:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.748 17:35:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.748 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:29.748 ************************************ 00:05:29.748 START TEST thread 00:05:29.748 ************************************ 00:05:29.748 17:35:19 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:30.006 * Looking for test storage... 
00:05:30.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.006 17:35:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.006 17:35:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.006 17:35:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.006 17:35:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.006 17:35:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.006 17:35:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.006 17:35:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.006 17:35:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.006 17:35:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.006 17:35:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.006 17:35:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.006 17:35:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:30.006 17:35:19 thread -- scripts/common.sh@345 -- # : 1 00:05:30.006 17:35:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.006 17:35:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.006 17:35:19 thread -- scripts/common.sh@365 -- # decimal 1 00:05:30.006 17:35:19 thread -- scripts/common.sh@353 -- # local d=1 00:05:30.006 17:35:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.006 17:35:19 thread -- scripts/common.sh@355 -- # echo 1 00:05:30.006 17:35:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.006 17:35:19 thread -- scripts/common.sh@366 -- # decimal 2 00:05:30.006 17:35:19 thread -- scripts/common.sh@353 -- # local d=2 00:05:30.006 17:35:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.006 17:35:19 thread -- scripts/common.sh@355 -- # echo 2 00:05:30.006 17:35:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.006 17:35:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.006 17:35:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.006 17:35:19 thread -- scripts/common.sh@368 -- # return 0 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.006 --rc genhtml_branch_coverage=1 00:05:30.006 --rc genhtml_function_coverage=1 00:05:30.006 --rc genhtml_legend=1 00:05:30.006 --rc geninfo_all_blocks=1 00:05:30.006 --rc geninfo_unexecuted_blocks=1 00:05:30.006 00:05:30.006 ' 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.006 --rc genhtml_branch_coverage=1 00:05:30.006 --rc genhtml_function_coverage=1 00:05:30.006 --rc genhtml_legend=1 00:05:30.006 --rc geninfo_all_blocks=1 00:05:30.006 --rc geninfo_unexecuted_blocks=1 00:05:30.006 00:05:30.006 ' 00:05:30.006 17:35:19 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:30.006 --rc genhtml_branch_coverage=1 00:05:30.007 --rc genhtml_function_coverage=1 00:05:30.007 --rc genhtml_legend=1 00:05:30.007 --rc geninfo_all_blocks=1 00:05:30.007 --rc geninfo_unexecuted_blocks=1 00:05:30.007 00:05:30.007 ' 00:05:30.007 17:35:19 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.007 --rc genhtml_branch_coverage=1 00:05:30.007 --rc genhtml_function_coverage=1 00:05:30.007 --rc genhtml_legend=1 00:05:30.007 --rc geninfo_all_blocks=1 00:05:30.007 --rc geninfo_unexecuted_blocks=1 00:05:30.007 00:05:30.007 ' 00:05:30.007 17:35:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:30.007 17:35:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:30.007 17:35:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.007 17:35:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.007 ************************************ 00:05:30.007 START TEST thread_poller_perf 00:05:30.007 ************************************ 00:05:30.007 17:35:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:30.007 [2024-10-13 17:35:19.688141] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:30.007 [2024-10-13 17:35:19.688229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:05:30.265 [2024-10-13 17:35:19.829492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.265 [2024-10-13 17:35:19.908136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.265 Running 1000 pollers for 1 seconds with 1 microseconds period. 
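poller_cost in the result block below is evidently busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz; the reported 6511 cyc / 2504 nsec reproduce from the raw counters:

    # poller_cost = busy / total_run_count; nsec = cyc * 1e9 / tsc_hz
    echo $(( 2611263486 / 401000 ))              # -> 6511 cyc per poller invocation
    echo $(( 6511 * 1000000000 / 2600000000 ))   # -> 2504 nsec at tsc_hz 2.6 GHz

The same arithmetic holds for the zero-period run further down: 2602809498 / 5290000 gives 492 cyc, i.e. 189 nsec.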
[2024-10-13T17:35:21.452Z] ======================================
00:05:31.638 [2024-10-13T17:35:21.452Z] busy:2611263486 (cyc)
00:05:31.638 [2024-10-13T17:35:21.452Z] total_run_count: 401000
00:05:31.638 [2024-10-13T17:35:21.452Z] tsc_hz: 2600000000 (cyc)
00:05:31.638 [2024-10-13T17:35:21.452Z] ======================================
00:05:31.638 [2024-10-13T17:35:21.452Z] poller_cost: 6511 (cyc), 2504 (nsec)
************************************
00:05:31.638 END TEST thread_poller_perf
************************************
00:05:31.638
00:05:31.638 real 0m1.377s
00:05:31.638 user 0m1.201s
00:05:31.638 sys 0m0.068s
00:05:31.638 17:35:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.638 17:35:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:31.638 17:35:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:31.638 17:35:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:05:31.638 17:35:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.638 17:35:21 thread -- common/autotest_common.sh@10 -- # set +x
00:05:31.638 ************************************
00:05:31.638 START TEST thread_poller_perf
00:05:31.638 ************************************
00:05:31.638 17:35:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:31.638 [2024-10-13 17:35:21.113141] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:05:31.638 [2024-10-13 17:35:21.113365] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ]
00:05:31.638 [2024-10-13 17:35:21.260932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.638 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:31.638 [2024-10-13 17:35:21.340782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.011 [2024-10-13T17:35:22.825Z] ====================================== 00:05:33.011 [2024-10-13T17:35:22.826Z] busy:2602809498 (cyc) 00:05:33.012 [2024-10-13T17:35:22.826Z] total_run_count: 5290000 00:05:33.012 [2024-10-13T17:35:22.826Z] tsc_hz: 2600000000 (cyc) 00:05:33.012 [2024-10-13T17:35:22.826Z] ====================================== 00:05:33.012 [2024-10-13T17:35:22.826Z] poller_cost: 492 (cyc), 189 (nsec) 00:05:33.012 00:05:33.012 real 0m1.384s 00:05:33.012 user 0m1.206s 00:05:33.012 sys 0m0.071s 00:05:33.012 ************************************ 00:05:33.012 END TEST thread_poller_perf 00:05:33.012 ************************************ 00:05:33.012 17:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.012 17:35:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.012 17:35:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:33.012 ************************************ 00:05:33.012 END TEST thread 00:05:33.012 ************************************ 00:05:33.012 00:05:33.012 real 0m2.989s 00:05:33.012 user 0m2.528s 00:05:33.012 sys 0m0.251s 00:05:33.012 17:35:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.012 17:35:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.012 17:35:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:33.012 17:35:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:33.012 17:35:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.012 17:35:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.012 17:35:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.012 ************************************ 00:05:33.012 START TEST app_cmdline 00:05:33.012 ************************************ 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:33.012 * Looking for test storage... 
00:05:33.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.012 17:35:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.012 --rc genhtml_branch_coverage=1 00:05:33.012 --rc genhtml_function_coverage=1 00:05:33.012 --rc genhtml_legend=1 00:05:33.012 --rc geninfo_all_blocks=1 00:05:33.012 --rc geninfo_unexecuted_blocks=1 00:05:33.012 00:05:33.012 ' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.012 --rc genhtml_branch_coverage=1 00:05:33.012 --rc genhtml_function_coverage=1 00:05:33.012 --rc genhtml_legend=1 00:05:33.012 --rc geninfo_all_blocks=1 00:05:33.012 --rc geninfo_unexecuted_blocks=1 00:05:33.012 
00:05:33.012 ' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.012 --rc genhtml_branch_coverage=1 00:05:33.012 --rc genhtml_function_coverage=1 00:05:33.012 --rc genhtml_legend=1 00:05:33.012 --rc geninfo_all_blocks=1 00:05:33.012 --rc geninfo_unexecuted_blocks=1 00:05:33.012 00:05:33.012 ' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.012 --rc genhtml_branch_coverage=1 00:05:33.012 --rc genhtml_function_coverage=1 00:05:33.012 --rc genhtml_legend=1 00:05:33.012 --rc geninfo_all_blocks=1 00:05:33.012 --rc geninfo_unexecuted_blocks=1 00:05:33.012 00:05:33.012 ' 00:05:33.012 17:35:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:33.012 17:35:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60026 00:05:33.012 17:35:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:33.012 17:35:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60026 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60026 ']' 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.012 17:35:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:33.012 [2024-10-13 17:35:22.728496] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
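Because the target above was started with --rpcs-allowed spdk_get_version,rpc_get_methods, its RPC server answers only those two methods; the env_dpdk_get_mem_stats call a few lines down is therefore rejected with JSON-RPC error -32601 ("Method not found") even though the method exists in an unrestricted target. In terms of the rpc.py calls the test issues:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed: returns the version object
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # allowed: lists exactly these two methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # filtered out: -32601, Method not found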
00:05:33.012 [2024-10-13 17:35:22.728678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60026 ] 00:05:33.271 [2024-10-13 17:35:22.868553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.271 [2024-10-13 17:35:22.948970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.837 17:35:23 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.837 17:35:23 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:33.837 17:35:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:34.095 { 00:05:34.095 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:05:34.095 "fields": { 00:05:34.095 "major": 25, 00:05:34.095 "minor": 1, 00:05:34.095 "patch": 0, 00:05:34.095 "suffix": "-pre", 00:05:34.095 "commit": "bbce7a874" 00:05:34.095 } 00:05:34.095 } 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:34.095 17:35:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:34.095 17:35:23 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.353 request: 00:05:34.353 { 00:05:34.353 "method": "env_dpdk_get_mem_stats", 00:05:34.353 "req_id": 1 00:05:34.353 } 00:05:34.353 Got JSON-RPC error response 00:05:34.353 response: 00:05:34.353 { 00:05:34.353 "code": -32601, 00:05:34.353 "message": "Method not found" 00:05:34.353 } 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.353 17:35:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60026 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60026 ']' 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60026 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60026 00:05:34.353 killing process with pid 60026 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60026' 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@969 -- # kill 60026 00:05:34.353 17:35:24 app_cmdline -- common/autotest_common.sh@974 -- # wait 60026 00:05:35.768 ************************************ 00:05:35.768 END TEST app_cmdline 00:05:35.768 ************************************ 00:05:35.768 00:05:35.768 real 0m2.676s 00:05:35.768 user 0m3.007s 00:05:35.768 sys 0m0.404s 00:05:35.768 17:35:25 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.768 17:35:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.768 17:35:25 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:35.768 17:35:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.768 17:35:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.768 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.768 ************************************ 00:05:35.768 START TEST version 00:05:35.768 ************************************ 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:35.768 * Looking for test storage... 
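version.sh, whose run follows, recovers the version string by grepping the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h and then checks that python's spdk.__version__ agrees (25.1 with the -pre suffix is reported as 25.1rc0). The traced grep | cut | tr pipeline condenses to roughly this sketch (assuming the header separates fields with tabs, which is what cut -f2 implies):

    get_header_version() {   # sketch of the helper traced below
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version major).$(get_header_version minor)"   # -> 25.1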
00:05:35.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.768 17:35:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.768 17:35:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.768 17:35:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.768 17:35:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.768 17:35:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.768 17:35:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.768 17:35:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.768 17:35:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.768 17:35:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.768 17:35:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.768 17:35:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.768 17:35:25 version -- scripts/common.sh@344 -- # case "$op" in 00:05:35.768 17:35:25 version -- scripts/common.sh@345 -- # : 1 00:05:35.768 17:35:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.768 17:35:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.768 17:35:25 version -- scripts/common.sh@365 -- # decimal 1 00:05:35.768 17:35:25 version -- scripts/common.sh@353 -- # local d=1 00:05:35.768 17:35:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.768 17:35:25 version -- scripts/common.sh@355 -- # echo 1 00:05:35.768 17:35:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.768 17:35:25 version -- scripts/common.sh@366 -- # decimal 2 00:05:35.768 17:35:25 version -- scripts/common.sh@353 -- # local d=2 00:05:35.768 17:35:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.768 17:35:25 version -- scripts/common.sh@355 -- # echo 2 00:05:35.768 17:35:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.768 17:35:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.768 17:35:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.768 17:35:25 version -- scripts/common.sh@368 -- # return 0 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.768 --rc genhtml_branch_coverage=1 00:05:35.768 --rc genhtml_function_coverage=1 00:05:35.768 --rc genhtml_legend=1 00:05:35.768 --rc geninfo_all_blocks=1 00:05:35.768 --rc geninfo_unexecuted_blocks=1 00:05:35.768 00:05:35.768 ' 00:05:35.768 17:35:25 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.768 --rc genhtml_branch_coverage=1 00:05:35.768 --rc genhtml_function_coverage=1 00:05:35.768 --rc genhtml_legend=1 00:05:35.769 --rc geninfo_all_blocks=1 00:05:35.769 --rc geninfo_unexecuted_blocks=1 00:05:35.769 00:05:35.769 ' 00:05:35.769 17:35:25 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.769 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:35.769 --rc genhtml_branch_coverage=1 00:05:35.769 --rc genhtml_function_coverage=1 00:05:35.769 --rc genhtml_legend=1 00:05:35.769 --rc geninfo_all_blocks=1 00:05:35.769 --rc geninfo_unexecuted_blocks=1 00:05:35.769 00:05:35.769 ' 00:05:35.769 17:35:25 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.769 --rc genhtml_branch_coverage=1 00:05:35.769 --rc genhtml_function_coverage=1 00:05:35.769 --rc genhtml_legend=1 00:05:35.769 --rc geninfo_all_blocks=1 00:05:35.769 --rc geninfo_unexecuted_blocks=1 00:05:35.769 00:05:35.769 ' 00:05:35.769 17:35:25 version -- app/version.sh@17 -- # get_header_version major 00:05:35.769 17:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # cut -f2 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.769 17:35:25 version -- app/version.sh@17 -- # major=25 00:05:35.769 17:35:25 version -- app/version.sh@18 -- # get_header_version minor 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # cut -f2 00:05:35.769 17:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:35.769 17:35:25 version -- app/version.sh@18 -- # minor=1 00:05:35.769 17:35:25 version -- app/version.sh@19 -- # get_header_version patch 00:05:35.769 17:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # cut -f2 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.769 17:35:25 version -- app/version.sh@19 -- # patch=0 00:05:35.769 17:35:25 version -- app/version.sh@20 -- # get_header_version suffix 00:05:35.769 17:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # cut -f2 00:05:35.769 17:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.769 17:35:25 version -- app/version.sh@20 -- # suffix=-pre 00:05:35.769 17:35:25 version -- app/version.sh@22 -- # version=25.1 00:05:35.769 17:35:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:35.769 17:35:25 version -- app/version.sh@28 -- # version=25.1rc0 00:05:35.769 17:35:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:35.769 17:35:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:35.769 17:35:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:35.769 17:35:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:35.769 ************************************ 00:05:35.769 END TEST version 00:05:35.769 ************************************ 00:05:35.769 00:05:35.769 real 0m0.182s 00:05:35.769 user 0m0.112s 00:05:35.769 sys 0m0.099s 00:05:35.769 17:35:25 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.769 17:35:25 version -- common/autotest_common.sh@10 -- # set +x 00:05:35.769 17:35:25 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:35.769 17:35:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:35.769 17:35:25 -- spdk/autotest.sh@194 -- # uname -s 00:05:35.769 17:35:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:35.769 17:35:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.769 17:35:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.769 17:35:25 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:35.769 17:35:25 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:35.769 17:35:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:35.769 17:35:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.769 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.769 ************************************ 00:05:35.769 START TEST blockdev_nvme 00:05:35.769 ************************************ 00:05:35.769 17:35:25 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:35.769 * Looking for test storage... 00:05:35.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:35.769 17:35:25 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.769 17:35:25 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.769 17:35:25 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.027 17:35:25 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 17:35:25 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:36.027 17:35:25 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:36.027 17:35:25 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60198 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60198 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60198 ']' 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.028 17:35:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:36.028 17:35:25 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:36.028 [2024-10-13 17:35:25.696105] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
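setup_nvme_conf, coming up next, captures the scripts/gen_nvme.sh output and hands it to load_subsystem_config, attaching the four QEMU NVMe controllers by PCIe address. Each generated entry is roughly equivalent to one explicit attach call; a hypothetical hand-rolled form of the first entry:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0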
00:05:36.028 [2024-10-13 17:35:25.696232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60198 ] 00:05:36.286 [2024-10-13 17:35:25.849717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.286 [2024-10-13 17:35:25.944689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.852 17:35:26 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.852 17:35:26 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.852 17:35:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:36.852 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.852 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.110 17:35:26 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.110 17:35:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:37.110 17:35:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.110 17:35:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.110 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.371 17:35:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.371 17:35:26 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:37.371 17:35:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:37.371 17:35:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.371 17:35:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.371 17:35:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:37.371 17:35:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:37.372 17:35:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "80ff0945-bb98-4033-ac58-2edd7bd534b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "80ff0945-bb98-4033-ac58-2edd7bd534b3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2c8b1725-918d-4630-b853-d4dbee8c2146"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2c8b1725-918d-4630-b853-d4dbee8c2146",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a9d3b778-8e83-4c4a-b079-45859fcaedcd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9d3b778-8e83-4c4a-b079-45859fcaedcd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e9468388-6261-4558-b710-099be329eb9e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9468388-6261-4558-b710-099be329eb9e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6e6a876a-287d-4d8a-a418-b52f49ee4ec9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "6e6a876a-287d-4d8a-a418-b52f49ee4ec9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "14930e09-8b46-4827-adff-2289fbaa76b2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "14930e09-8b46-4827-adff-2289fbaa76b2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:37.372 17:35:27 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:37.372 17:35:27 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:37.372 17:35:27 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:37.372 17:35:27 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60198 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60198 ']' 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60198 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:05:37.372 17:35:27 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60198 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.372 killing process with pid 60198 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60198' 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60198 00:05:37.372 17:35:27 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60198 00:05:38.745 17:35:28 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:38.745 17:35:28 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:38.745 17:35:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:05:38.745 17:35:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.745 17:35:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:38.745 ************************************ 00:05:38.745 START TEST bdev_hello_world 00:05:38.745 ************************************ 00:05:38.745 17:35:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:38.745 [2024-10-13 17:35:28.331629] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:38.745 [2024-10-13 17:35:28.331745] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60277 ] 00:05:38.745 [2024-10-13 17:35:28.478159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.002 [2024-10-13 17:35:28.558237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.261 [2024-10-13 17:35:29.048570] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:39.261 [2024-10-13 17:35:29.048618] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:39.261 [2024-10-13 17:35:29.048634] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:39.261 [2024-10-13 17:35:29.050586] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:39.261 [2024-10-13 17:35:29.051137] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:39.261 [2024-10-13 17:35:29.051164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:39.261 [2024-10-13 17:35:29.051318] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
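The bdev_hello_world test above is a thin wrapper around the example binary; for reference, its invocation (command line copied verbatim from the run_test trace) is:

    # Open bdev Nvme0n1 via the generated JSON config, write "Hello World!",
    # read it back, and stop the app (the hello_bdev.c NOTICE lines above).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1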
00:05:39.261 00:05:39.261 [2024-10-13 17:35:29.051335] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:39.827 00:05:39.827 real 0m1.340s 00:05:39.827 user 0m1.081s 00:05:39.827 sys 0m0.154s 00:05:39.827 ************************************ 00:05:39.827 END TEST bdev_hello_world 00:05:39.827 ************************************ 00:05:39.827 17:35:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.827 17:35:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:40.092 17:35:29 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:40.092 17:35:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:40.092 17:35:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.092 17:35:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:40.092 ************************************ 00:05:40.092 START TEST bdev_bounds 00:05:40.092 ************************************ 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:05:40.092 Process bdevio pid: 60313 00:05:40.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60313 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60313' 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60313 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60313 ']' 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:40.092 17:35:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:40.092 [2024-10-13 17:35:29.712063] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
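The bdev_bounds flow that starts here boils down to two commands; a minimal sketch, with paths and flags copied from the trace (the -w flag appears to hold the CUnit suites until they are triggered over RPC, which is what the perform_tests call further below does):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevio in wait mode on the shared RPC socket.
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    # Once /var/tmp/spdk.sock is up, kick off the per-bdev test suites.
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill $bdevio_pid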
00:05:40.092 [2024-10-13 17:35:29.712179] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:05:40.092 [2024-10-13 17:35:29.860667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.349 [2024-10-13 17:35:29.944285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.349 [2024-10-13 17:35:29.944662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.349 [2024-10-13 17:35:29.944685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.914 17:35:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.914 17:35:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:05:40.914 17:35:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:40.914 I/O targets: 00:05:40.914 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:40.914 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:40.914 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:40.914 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:40.914 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:40.914 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:40.914 00:05:40.914 00:05:40.914 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.914 http://cunit.sourceforge.net/ 00:05:40.914 00:05:40.914 00:05:40.914 Suite: bdevio tests on: Nvme3n1 00:05:40.914 Test: blockdev write read block ...passed 00:05:40.914 Test: blockdev write zeroes read block ...passed 00:05:40.914 Test: blockdev write zeroes read no split ...passed 00:05:40.914 Test: blockdev write zeroes read split ...passed 00:05:40.914 Test: blockdev write zeroes read split partial ...passed 00:05:40.914 Test: blockdev reset ...[2024-10-13 17:35:30.691169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:05:40.914 [2024-10-13 17:35:30.694143] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:40.914 passed 00:05:40.914 Test: blockdev write read 8 blocks ...passed 00:05:40.914 Test: blockdev write read size > 128k ...passed 00:05:40.914 Test: blockdev write read invalid size ...passed 00:05:40.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:40.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:40.914 Test: blockdev write read max offset ...passed 00:05:40.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:40.914 Test: blockdev writev readv 8 blocks ...passed 00:05:40.914 Test: blockdev writev readv 30 x 1block ...passed 00:05:40.914 Test: blockdev writev readv block ...passed 00:05:40.914 Test: blockdev writev readv size > 128k ...passed 00:05:40.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:40.914 Test: blockdev comparev and writev ...[2024-10-13 17:35:30.701208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac40a000 len:0x1000 00:05:40.915 [2024-10-13 17:35:30.701254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:40.915 passed 00:05:40.915 Test: blockdev nvme passthru rw ...passed 00:05:40.915 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:35:30.701853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:40.915 passed 00:05:40.915 Test: blockdev nvme admin passthru ...[2024-10-13 17:35:30.701880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:40.915 passed 00:05:40.915 Test: blockdev copy ...passed 00:05:40.915 Suite: bdevio tests on: Nvme2n3 00:05:40.915 Test: blockdev write read block ...passed 00:05:40.915 Test: blockdev write zeroes read block ...passed 00:05:40.915 Test: blockdev write zeroes read no split ...passed 00:05:41.173 Test: blockdev write zeroes read split ...passed 00:05:41.173 Test: blockdev write zeroes read split partial ...passed 00:05:41.173 Test: blockdev reset ...[2024-10-13 17:35:30.760240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:41.173 [2024-10-13 17:35:30.763320] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:41.173 passed 00:05:41.173 Test: blockdev write read 8 blocks ...passed 00:05:41.173 Test: blockdev write read size > 128k ...passed 00:05:41.173 Test: blockdev write read invalid size ...passed 00:05:41.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:41.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:41.173 Test: blockdev write read max offset ...passed 00:05:41.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:41.173 Test: blockdev writev readv 8 blocks ...passed 00:05:41.173 Test: blockdev writev readv 30 x 1block ...passed 00:05:41.173 Test: blockdev writev readv block ...passed 00:05:41.173 Test: blockdev writev readv size > 128k ...passed 00:05:41.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:41.173 Test: blockdev comparev and writev ...[2024-10-13 17:35:30.769836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28fe06000 len:0x1000 00:05:41.173 [2024-10-13 17:35:30.769873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme passthru rw ...passed 00:05:41.173 Test: blockdev nvme passthru vendor specific ...passed 00:05:41.173 Test: blockdev nvme admin passthru ...[2024-10-13 17:35:30.770415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:41.173 [2024-10-13 17:35:30.770437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev copy ...passed 00:05:41.173 Suite: bdevio tests on: Nvme2n2 00:05:41.173 Test: blockdev write read block ...passed 00:05:41.173 Test: blockdev write zeroes read block ...passed 00:05:41.173 Test: blockdev write zeroes read no split ...passed 00:05:41.173 Test: blockdev write zeroes read split ...passed 00:05:41.173 Test: blockdev write zeroes read split partial ...passed 00:05:41.173 Test: blockdev reset ...[2024-10-13 17:35:30.813941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:41.173 [2024-10-13 17:35:30.816872] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:41.173 passed 00:05:41.173 Test: blockdev write read 8 blocks ...passed 00:05:41.173 Test: blockdev write read size > 128k ...passed 00:05:41.173 Test: blockdev write read invalid size ...passed 00:05:41.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:41.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:41.173 Test: blockdev write read max offset ...passed 00:05:41.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:41.173 Test: blockdev writev readv 8 blocks ...passed 00:05:41.173 Test: blockdev writev readv 30 x 1block ...passed 00:05:41.173 Test: blockdev writev readv block ...passed 00:05:41.173 Test: blockdev writev readv size > 128k ...passed 00:05:41.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:41.173 Test: blockdev comparev and writev ...[2024-10-13 17:35:30.825294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e3c000 len:0x1000 00:05:41.173 [2024-10-13 17:35:30.825415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme passthru rw ...passed 00:05:41.173 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:35:30.826364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:41.173 [2024-10-13 17:35:30.826695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme admin passthru ...passed 00:05:41.173 Test: blockdev copy ...passed 00:05:41.173 Suite: bdevio tests on: Nvme2n1 00:05:41.173 Test: blockdev write read block ...passed 00:05:41.173 Test: blockdev write zeroes read block ...passed 00:05:41.173 Test: blockdev write zeroes read no split ...passed 00:05:41.173 Test: blockdev write zeroes read split ...passed 00:05:41.173 Test: blockdev write zeroes read split partial ...passed 00:05:41.173 Test: blockdev reset ...[2024-10-13 17:35:30.880435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:41.173 [2024-10-13 17:35:30.883457] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:41.173 passed 00:05:41.173 Test: blockdev write read 8 blocks ...passed 00:05:41.173 Test: blockdev write read size > 128k ...passed 00:05:41.173 Test: blockdev write read invalid size ...passed 00:05:41.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:41.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:41.173 Test: blockdev write read max offset ...passed 00:05:41.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:41.173 Test: blockdev writev readv 8 blocks ...passed 00:05:41.173 Test: blockdev writev readv 30 x 1block ...passed 00:05:41.173 Test: blockdev writev readv block ...passed 00:05:41.173 Test: blockdev writev readv size > 128k ...passed 00:05:41.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:41.173 Test: blockdev comparev and writev ...[2024-10-13 17:35:30.890922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e38000 len:0x1000 00:05:41.173 [2024-10-13 17:35:30.890963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme passthru rw ...passed 00:05:41.173 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:35:30.891539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme admin passthru ...[2024-10-13 17:35:30.891573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev copy ...passed 00:05:41.173 Suite: bdevio tests on: Nvme1n1 00:05:41.173 Test: blockdev write read block ...passed 00:05:41.173 Test: blockdev write zeroes read block ...passed 00:05:41.173 Test: blockdev write zeroes read no split ...passed 00:05:41.173 Test: blockdev write zeroes read split ...passed 00:05:41.173 Test: blockdev write zeroes read split partial ...passed 00:05:41.173 Test: blockdev reset ...[2024-10-13 17:35:30.943747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:05:41.173 [2024-10-13 17:35:30.946729] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:41.173 passed 00:05:41.173 Test: blockdev write read 8 blocks ...passed 00:05:41.173 Test: blockdev write read size > 128k ...passed 00:05:41.173 Test: blockdev write read invalid size ...passed 00:05:41.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:41.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:41.173 Test: blockdev write read max offset ...passed 00:05:41.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:41.173 Test: blockdev writev readv 8 blocks ...passed 00:05:41.173 Test: blockdev writev readv 30 x 1block ...passed 00:05:41.173 Test: blockdev writev readv block ...passed 00:05:41.173 Test: blockdev writev readv size > 128k ...passed 00:05:41.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:41.173 Test: blockdev comparev and writev ...[2024-10-13 17:35:30.955012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0e34000 len:0x1000 00:05:41.173 [2024-10-13 17:35:30.955143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:41.173 passed 00:05:41.173 Test: blockdev nvme passthru rw ...passed 00:05:41.173 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:35:30.956011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:41.173 [2024-10-13 17:35:30.956122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:41.174 passed 00:05:41.174 Test: blockdev nvme admin passthru ...passed 00:05:41.174 Test: blockdev copy ...passed 00:05:41.174 Suite: bdevio tests on: Nvme0n1 00:05:41.174 Test: blockdev write read block ...passed 00:05:41.174 Test: blockdev write zeroes read block ...passed 00:05:41.174 Test: blockdev write zeroes read no split ...passed 00:05:41.431 Test: blockdev write zeroes read split ...passed 00:05:41.431 Test: blockdev write zeroes read split partial ...passed 00:05:41.431 Test: blockdev reset ...[2024-10-13 17:35:31.010679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:05:41.431 [2024-10-13 17:35:31.013380] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:05:41.431 passed 00:05:41.431 Test: blockdev write read 8 blocks ...passed 00:05:41.431 Test: blockdev write read size > 128k ...passed 00:05:41.431 Test: blockdev write read invalid size ...passed 00:05:41.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:41.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:41.431 Test: blockdev write read max offset ...passed 00:05:41.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:41.431 Test: blockdev writev readv 8 blocks ...passed 00:05:41.431 Test: blockdev writev readv 30 x 1block ...passed 00:05:41.431 Test: blockdev writev readv block ...passed 00:05:41.431 Test: blockdev writev readv size > 128k ...passed 00:05:41.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:41.431 Test: blockdev comparev and writev ...[2024-10-13 17:35:31.019880] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:41.431 separate metadata which is not supported yet. 
00:05:41.431 passed 00:05:41.431 Test: blockdev nvme passthru rw ...passed 00:05:41.431 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:35:31.020613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:41.431 [2024-10-13 17:35:31.020717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:41.431 passed 00:05:41.431 Test: blockdev nvme admin passthru ...passed 00:05:41.431 Test: blockdev copy ...passed 00:05:41.431 00:05:41.431 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.431 suites 6 6 n/a 0 0 00:05:41.431 tests 138 138 138 0 0 00:05:41.431 asserts 893 893 893 0 n/a 00:05:41.431 00:05:41.431 Elapsed time = 0.988 seconds 00:05:41.431 0 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60313 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60313 ']' 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60313 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60313 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.431 killing process with pid 60313 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60313' 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60313 00:05:41.431 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60313 00:05:41.998 17:35:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:41.998 00:05:41.998 real 0m2.077s 00:05:41.998 user 0m5.340s 00:05:41.998 sys 0m0.289s 00:05:41.998 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.998 17:35:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:41.999 ************************************ 00:05:41.999 END TEST bdev_bounds 00:05:41.999 ************************************ 00:05:41.999 17:35:31 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:41.999 17:35:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:41.999 17:35:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.999 17:35:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.999 ************************************ 00:05:41.999 START TEST bdev_nbd 00:05:41.999 ************************************ 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:41.999 17:35:31 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:41.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60367 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60367 /var/tmp/spdk-nbd.sock 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60367 ']' 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:41.999 17:35:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:42.259 [2024-10-13 17:35:31.847431] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:42.259 [2024-10-13 17:35:31.847577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.259 [2024-10-13 17:35:32.001861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.519 [2024-10-13 17:35:32.113438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:43.089 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:43.349 1+0 records in 
00:05:43.349 1+0 records out 00:05:43.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706689 s, 5.8 MB/s 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:43.349 17:35:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:43.349 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:43.349 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:43.609 1+0 records in 00:05:43.609 1+0 records out 00:05:43.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494207 s, 8.3 MB/s 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:43.609 1+0 records in 00:05:43.609 1+0 records out 00:05:43.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560496 s, 7.3 MB/s 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:43.609 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:44.177 1+0 records in 00:05:44.177 1+0 records out 00:05:44.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549616 s, 7.5 MB/s 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.177 17:35:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:44.177 1+0 records in 00:05:44.177 1+0 records out 00:05:44.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582943 s, 7.0 MB/s 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:44.177 17:35:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.435 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:44.436 1+0 records in 00:05:44.436 1+0 records out 00:05:44.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515873 s, 7.9 MB/s 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:44.436 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd0", 00:05:44.693 "bdev_name": "Nvme0n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd1", 00:05:44.693 "bdev_name": "Nvme1n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd2", 00:05:44.693 "bdev_name": "Nvme2n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd3", 00:05:44.693 "bdev_name": "Nvme2n2" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd4", 00:05:44.693 "bdev_name": "Nvme2n3" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd5", 00:05:44.693 "bdev_name": "Nvme3n1" 00:05:44.693 } 00:05:44.693 ]' 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd0", 00:05:44.693 "bdev_name": "Nvme0n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd1", 00:05:44.693 "bdev_name": "Nvme1n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd2", 00:05:44.693 "bdev_name": "Nvme2n1" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd3", 00:05:44.693 "bdev_name": "Nvme2n2" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd4", 00:05:44.693 "bdev_name": "Nvme2n3" 00:05:44.693 }, 00:05:44.693 { 00:05:44.693 "nbd_device": "/dev/nbd5", 00:05:44.693 "bdev_name": "Nvme3n1" 00:05:44.693 } 00:05:44.693 ]' 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.693 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.952 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.210 17:35:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.468 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.725 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.726 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.984 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.242 17:35:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.242 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:46.243 17:35:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:46.501 /dev/nbd0 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.501 
17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.501 1+0 records in 00:05:46.501 1+0 records out 00:05:46.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737831 s, 5.6 MB/s 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:46.501 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:46.760 /dev/nbd1 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.760 1+0 records in 00:05:46.760 1+0 records out 00:05:46.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411901 s, 9.9 MB/s 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:46.760 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:47.018 /dev/nbd10 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.018 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:47.019 1+0 records in 00:05:47.019 1+0 records out 00:05:47.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408102 s, 10.0 MB/s 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:47.019 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:47.277 /dev/nbd11 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.277 17:35:36 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:47.277 1+0 records in 00:05:47.277 1+0 records out 00:05:47.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393871 s, 10.4 MB/s 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:47.277 17:35:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:47.277 /dev/nbd12 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:47.536 1+0 records in 00:05:47.536 1+0 records out 00:05:47.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586402 s, 7.0 MB/s 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:47.536 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:47.536 /dev/nbd13 
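Each nbd_start_disk above gates on waitfornbd from common/autotest_common.sh before the next device is mapped (the /dev/nbd13 instance continues just below). Reconstructed from the xtrace — the grep loop at sh@871-873 and the dd/stat check at sh@884-889 are visible in the log, while the sleep interval and exact retry handling are assumptions — a minimal sketch of that polling pattern:

# Poll until the kernel lists the device in /proc/partitions, then prove it
# actually serves I/O with a single 4 KiB O_DIRECT read.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; only the loop bounds appear in the trace
    done
    for ((i = 1; i <= 20; i++)); do
        # A hung NBD connection would block or read nothing here.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1    # assumed
    done
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]    # a zero-byte read means the device is not really up
}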
00:05:47.794 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:47.795 1+0 records in 00:05:47.795 1+0 records out 00:05:47.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615453 s, 6.7 MB/s 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd0", 00:05:47.795 "bdev_name": "Nvme0n1" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd1", 00:05:47.795 "bdev_name": "Nvme1n1" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd10", 00:05:47.795 "bdev_name": "Nvme2n1" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd11", 00:05:47.795 "bdev_name": "Nvme2n2" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd12", 00:05:47.795 "bdev_name": "Nvme2n3" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd13", 00:05:47.795 "bdev_name": "Nvme3n1" 00:05:47.795 } 00:05:47.795 ]' 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd0", 00:05:47.795 "bdev_name": "Nvme0n1" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd1", 00:05:47.795 "bdev_name": "Nvme1n1" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd10", 00:05:47.795 "bdev_name": "Nvme2n1" 
00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd11", 00:05:47.795 "bdev_name": "Nvme2n2" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd12", 00:05:47.795 "bdev_name": "Nvme2n3" 00:05:47.795 }, 00:05:47.795 { 00:05:47.795 "nbd_device": "/dev/nbd13", 00:05:47.795 "bdev_name": "Nvme3n1" 00:05:47.795 } 00:05:47.795 ]' 00:05:47.795 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.053 /dev/nbd1 00:05:48.053 /dev/nbd10 00:05:48.053 /dev/nbd11 00:05:48.053 /dev/nbd12 00:05:48.053 /dev/nbd13' 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.053 /dev/nbd1 00:05:48.053 /dev/nbd10 00:05:48.053 /dev/nbd11 00:05:48.053 /dev/nbd12 00:05:48.053 /dev/nbd13' 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:48.053 256+0 records in 00:05:48.053 256+0 records out 00:05:48.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769394 s, 136 MB/s 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.053 256+0 records in 00:05:48.053 256+0 records out 00:05:48.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0591262 s, 17.7 MB/s 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.053 256+0 records in 00:05:48.053 256+0 records out 00:05:48.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0568819 s, 18.4 MB/s 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:48.053 256+0 records in 00:05:48.053 256+0 records out 
00:05:48.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570042 s, 18.4 MB/s 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.053 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:48.317 256+0 records in 00:05:48.317 256+0 records out 00:05:48.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0767067 s, 13.7 MB/s 00:05:48.317 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.317 17:35:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:48.317 256+0 records in 00:05:48.317 256+0 records out 00:05:48.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118728 s, 8.8 MB/s 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:48.317 256+0 records in 00:05:48.317 256+0 records out 00:05:48.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0782683 s, 13.4 MB/s 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.317 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.318 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.582 17:35:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.582 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.840 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.099 
17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.099 17:35:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.357 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:49.615 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:49.615 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.616 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.874 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:50.132 malloc_lvol_verify 00:05:50.132 17:35:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:50.390 2a512abb-6ccd-48d7-a941-f24841b607fa 00:05:50.390 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:50.647 5ee5b7a2-c4ce-4229-88a0-e58d3e7bc090 00:05:50.647 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:50.905 /dev/nbd0 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:50.905 mke2fs 1.47.0 (5-Feb-2023) 00:05:50.905 Discarding device blocks: 0/4096 done 00:05:50.905 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:50.905 00:05:50.905 Allocating group tables: 0/1 done 00:05:50.905 Writing inode tables: 0/1 done 00:05:50.905 Creating journal (1024 blocks): done 00:05:50.905 Writing superblocks and filesystem accounting information: 0/1 done 00:05:50.905 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
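The nbd_with_lvol_verify sequence above exercises the full malloc-bdev -> lvstore -> lvol -> NBD path, ending with the stop-disk call traced next. Condensed into a standalone sketch against a running spdk-nbd target — the RPC names, sizes, and socket path are verbatim from the trace; the capacity poll is inferred from the wait_for_nbd_set_capacity check of /sys/block/nbd0/size:

#!/usr/bin/env bash
set -euo pipefail
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0

# Wait for a non-zero capacity before touching the device; the trace read
# 8192 sectors (x 512 B = 4 MiB) from /sys/block/nbd0/size.
while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done

mkfs.ext4 /dev/nbd0                                    # end-to-end usability check
$RPC nbd_stop_disk /dev/nbd0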
00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.905 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60367 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60367 ']' 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60367 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60367 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.163 killing process with pid 60367 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60367' 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60367 00:05:51.163 17:35:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60367 00:05:52.097 17:35:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:52.097 00:05:52.097 real 0m9.829s 00:05:52.097 user 0m13.916s 00:05:52.097 sys 0m3.211s 00:05:52.097 17:35:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.097 17:35:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:52.097 ************************************ 00:05:52.097 END TEST bdev_nbd 00:05:52.097 ************************************ 00:05:52.097 17:35:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:52.097 17:35:41 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:52.097 skipping fio tests on NVMe due to multi-ns failures. 00:05:52.097 17:35:41 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
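With the NBD suite finished and fio skipped, each remaining test is a single bdevperf invocation against the same bdev.json. The bdev_verify run that starts below uses the command line shown here (verbatim from the trace); the flag annotations are inferred readings rather than documentation, and the meaning of -C in particular is deduced from the paired per-core result rows:

#   -q 128     queue depth per job
#   -o 4096    I/O size in bytes
#   -w verify  write each block, read it back, and compare
#   -t 5       run time in seconds
#   -m 0x3     reactor core mask: cores 0 and 1
#   -C         inferred: one job per core per bdev, hence the "Core Mask 0x1"
#              and "Core Mask 0x2" rows for every Nvme bdev in the table below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3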
00:05:52.097 17:35:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:52.097 17:35:41 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:52.097 17:35:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:05:52.097 17:35:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.097 17:35:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.097 ************************************ 00:05:52.097 START TEST bdev_verify 00:05:52.097 ************************************ 00:05:52.097 17:35:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:52.097 [2024-10-13 17:35:41.716967] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:52.097 [2024-10-13 17:35:41.717101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 00:05:52.097 [2024-10-13 17:35:41.868018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.355 [2024-10-13 17:35:41.951286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.355 [2024-10-13 17:35:41.951380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.920 Running I/O for 5 seconds... 00:05:54.787 21184.00 IOPS, 82.75 MiB/s [2024-10-13T17:35:45.974Z] 22272.00 IOPS, 87.00 MiB/s [2024-10-13T17:35:46.907Z] 23658.67 IOPS, 92.42 MiB/s [2024-10-13T17:35:47.847Z] 24272.00 IOPS, 94.81 MiB/s 00:05:58.033 Latency(us) 00:05:58.033 [2024-10-13T17:35:47.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:58.033 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:58.033 Verification LBA range: start 0x0 length 0xbd0bd 00:05:58.033 Nvme0n1 : 5.03 1985.48 7.76 0.00 0.00 64288.96 13712.15 62914.56 00:05:58.033 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.033 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:58.034 Nvme0n1 : 5.05 1877.18 7.33 0.00 0.00 68020.06 11746.07 83482.78 00:05:58.034 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x0 length 0xa0000 00:05:58.034 Nvme1n1 : 5.03 1984.99 7.75 0.00 0.00 64191.39 15829.46 59688.17 00:05:58.034 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0xa0000 length 0xa0000 00:05:58.034 Nvme1n1 : 5.05 1876.56 7.33 0.00 0.00 67929.05 13712.15 83482.78 00:05:58.034 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x0 length 0x80000 00:05:58.034 Nvme2n1 : 5.05 2000.42 7.81 0.00 0.00 63648.83 8922.98 56058.49 00:05:58.034 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x80000 length 0x80000 00:05:58.034 Nvme2n1 : 5.05 1875.97 7.33 0.00 0.00 67857.35 14720.39 81869.59 00:05:58.034 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 
4096) 00:05:58.034 Verification LBA range: start 0x0 length 0x80000 00:05:58.034 Nvme2n2 : 5.06 1999.85 7.81 0.00 0.00 63566.50 8166.79 56865.08 00:05:58.034 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x80000 length 0x80000 00:05:58.034 Nvme2n2 : 5.05 1875.41 7.33 0.00 0.00 67774.75 16031.11 80659.69 00:05:58.034 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x0 length 0x80000 00:05:58.034 Nvme2n3 : 5.06 1999.29 7.81 0.00 0.00 63476.36 8771.74 60494.77 00:05:58.034 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x80000 length 0x80000 00:05:58.034 Nvme2n3 : 5.05 1874.81 7.32 0.00 0.00 67696.68 14417.92 81062.99 00:05:58.034 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x0 length 0x20000 00:05:58.034 Nvme3n1 : 5.06 1998.14 7.81 0.00 0.00 63395.05 8822.15 63721.16 00:05:58.034 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:58.034 Verification LBA range: start 0x20000 length 0x20000 00:05:58.034 Nvme3n1 : 5.06 1885.21 7.36 0.00 0.00 67252.84 2508.01 83886.08 00:05:58.034 [2024-10-13T17:35:47.848Z] =================================================================================================================== 00:05:58.034 [2024-10-13T17:35:47.848Z] Total : 23233.30 90.76 0.00 0.00 65696.91 2508.01 83886.08 00:05:58.964 00:05:58.964 real 0m7.082s 00:05:58.964 user 0m13.300s 00:05:58.964 sys 0m0.224s 00:05:58.964 17:35:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.964 17:35:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:58.964 ************************************ 00:05:58.964 END TEST bdev_verify 00:05:58.964 ************************************ 00:05:59.223 17:35:48 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:59.223 17:35:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:05:59.223 17:35:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.223 17:35:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.223 ************************************ 00:05:59.223 START TEST bdev_verify_big_io 00:05:59.223 ************************************ 00:05:59.223 17:35:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:59.223 [2024-10-13 17:35:48.847065] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:59.223 [2024-10-13 17:35:48.847157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60839 ] 00:05:59.223 [2024-10-13 17:35:48.983549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.480 [2024-10-13 17:35:49.065438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.480 [2024-10-13 17:35:49.065500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.045 Running I/O for 5 seconds... 00:06:06.122 1730.00 IOPS, 108.12 MiB/s [2024-10-13T17:35:56.870Z] 3282.00 IOPS, 205.12 MiB/s [2024-10-13T17:35:56.870Z] 3731.33 IOPS, 233.21 MiB/s 00:06:07.056 Latency(us) 00:06:07.056 [2024-10-13T17:35:56.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:07.056 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0xbd0b 00:06:07.056 Nvme0n1 : 6.09 84.02 5.25 0.00 0.00 1459926.45 29239.14 2245565.83 00:06:07.056 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:07.056 Nvme0n1 : 5.60 125.76 7.86 0.00 0.00 967393.57 23895.43 1155046.79 00:06:07.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0xa000 00:06:07.056 Nvme1n1 : 6.10 80.72 5.05 0.00 0.00 1397394.74 62511.26 1832588.21 00:06:07.056 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0xa000 length 0xa000 00:06:07.056 Nvme1n1 : 5.69 134.91 8.43 0.00 0.00 886959.92 92355.35 974369.08 00:06:07.056 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0x8000 00:06:07.056 Nvme2n1 : 6.10 83.98 5.25 0.00 0.00 1251973.12 75013.51 1406705.03 00:06:07.056 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x8000 length 0x8000 00:06:07.056 Nvme2n1 : 5.77 137.85 8.62 0.00 0.00 838384.04 77836.60 877577.45 00:06:07.056 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0x8000 00:06:07.056 Nvme2n2 : 6.24 112.90 7.06 0.00 0.00 892430.21 10989.88 1245385.65 00:06:07.056 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x8000 length 0x8000 00:06:07.056 Nvme2n2 : 5.87 140.92 8.81 0.00 0.00 793120.32 91145.45 1077613.49 00:06:07.056 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0x8000 00:06:07.056 Nvme2n3 : 6.53 198.71 12.42 0.00 0.00 476755.49 10989.88 1277649.53 00:06:07.056 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x8000 length 0x8000 00:06:07.056 Nvme2n3 : 5.96 141.39 8.84 0.00 0.00 766228.52 52025.50 1413157.81 00:06:07.056 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x0 length 0x2000 00:06:07.056 Nvme3n1 : 6.88 393.80 24.61 0.00 0.00 227519.00 1827.45 1303460.63 00:06:07.056 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:06:07.056 Verification LBA range: start 0x2000 length 0x2000 00:06:07.056 Nvme3n1 : 6.07 169.65 10.60 0.00 0.00 621793.81 869.61 1096971.82 00:06:07.056 [2024-10-13T17:35:56.870Z] =================================================================================================================== 00:06:07.056 [2024-10-13T17:35:56.870Z] Total : 1804.61 112.79 0.00 0.00 706679.09 869.61 2245565.83 00:06:08.430 00:06:08.430 real 0m9.251s 00:06:08.430 user 0m17.650s 00:06:08.430 sys 0m0.232s 00:06:08.430 17:35:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.430 17:35:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:08.430 ************************************ 00:06:08.430 END TEST bdev_verify_big_io 00:06:08.430 ************************************ 00:06:08.430 17:35:58 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:08.430 17:35:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:08.430 17:35:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.430 17:35:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.430 ************************************ 00:06:08.430 START TEST bdev_write_zeroes 00:06:08.430 ************************************ 00:06:08.430 17:35:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:08.430 [2024-10-13 17:35:58.158082] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:08.430 [2024-10-13 17:35:58.158210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60962 ] 00:06:08.687 [2024-10-13 17:35:58.307895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.687 [2024-10-13 17:35:58.391583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.252 Running I/O for 1 seconds... 
00:06:10.187 84096.00 IOPS, 328.50 MiB/s 00:06:10.187 Latency(us) 00:06:10.187 [2024-10-13T17:36:00.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:10.187 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme0n1 : 1.02 13941.40 54.46 0.00 0.00 9163.61 6805.66 19156.68 00:06:10.187 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme1n1 : 1.02 13925.17 54.40 0.00 0.00 9162.27 6856.07 18753.38 00:06:10.187 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme2n1 : 1.02 13909.33 54.33 0.00 0.00 9152.81 6856.07 18249.26 00:06:10.187 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme2n2 : 1.02 13893.53 54.27 0.00 0.00 9150.12 6805.66 17845.96 00:06:10.187 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme2n3 : 1.02 13877.72 54.21 0.00 0.00 9125.32 6251.13 17543.48 00:06:10.187 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:10.187 Nvme3n1 : 1.02 13861.95 54.15 0.00 0.00 9123.60 5948.65 19156.68 00:06:10.187 [2024-10-13T17:36:00.001Z] =================================================================================================================== 00:06:10.187 [2024-10-13T17:36:00.001Z] Total : 83409.10 325.82 0.00 0.00 9146.29 5948.65 19156.68 00:06:11.122 00:06:11.122 real 0m2.595s 00:06:11.122 user 0m2.298s 00:06:11.122 sys 0m0.185s 00:06:11.122 17:36:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.122 17:36:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:11.122 ************************************ 00:06:11.122 END TEST bdev_write_zeroes 00:06:11.122 ************************************ 00:06:11.122 17:36:00 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:11.122 17:36:00 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:11.122 17:36:00 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.122 17:36:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.122 ************************************ 00:06:11.122 START TEST bdev_json_nonenclosed 00:06:11.122 ************************************ 00:06:11.122 17:36:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:11.122 [2024-10-13 17:36:00.788145] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:11.122 [2024-10-13 17:36:00.788280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ]
00:06:11.381 [2024-10-13 17:36:00.937171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.381 [2024-10-13 17:36:01.038509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.381 [2024-10-13 17:36:01.038601] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:11.381 [2024-10-13 17:36:01.038617] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:11.381 [2024-10-13 17:36:01.038627] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:11.639
00:06:11.639 real 0m0.488s
00:06:11.639 user 0m0.301s
00:06:11.639 sys 0m0.084s
00:06:11.639 17:36:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:11.639 17:36:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:11.639 ************************************
00:06:11.639 END TEST bdev_json_nonenclosed
00:06:11.639 ************************************
00:06:11.639 17:36:01 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:11.639 17:36:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:06:11.639 17:36:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:11.639 17:36:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:11.639 ************************************
00:06:11.639 START TEST bdev_json_nonarray
00:06:11.639 ************************************
00:06:11.639 17:36:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:11.897 [2024-10-13 17:36:01.328012] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:06:11.897 [2024-10-13 17:36:01.328147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ]
00:06:11.897 [2024-10-13 17:36:01.481716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.897 [2024-10-13 17:36:01.583529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.897 [2024-10-13 17:36:01.583626] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
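Both JSON guard tests here (bdev_json_nonenclosed above, bdev_json_nonarray in progress) hand bdevperf a deliberately malformed config and count as passed only when json_config_prepare_ctx rejects it and the app stops with a non-zero status. The fixture files themselves are not printed in this log, so the shapes below are assumptions inferred from the two error messages:

  # well-formed: a top-level object whose "subsystems" key is an array
  printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > good.json
  # nonenclosed.json-style failure: config is "not enclosed in {}"
  printf '%s\n' '"subsystems": []' > bad_nonenclosed.json
  # nonarray.json-style failure: "subsystems" should be an array, not an object
  printf '%s\n' '{ "subsystems": { "subsystem": "bdev" } }' > bad_nonarray.json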
00:06:11.897 [2024-10-13 17:36:01.583645] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:11.897 [2024-10-13 17:36:01.583654] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:12.155
00:06:12.155 real 0m0.500s
00:06:12.155 user 0m0.302s
00:06:12.155 sys 0m0.092s
00:06:12.155 17:36:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:12.155 17:36:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:12.155 ************************************
00:06:12.155 END TEST bdev_json_nonarray
00:06:12.155 ************************************
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:12.155 17:36:01 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:12.155
00:06:12.155 real 0m36.324s
00:06:12.155 user 0m57.095s
00:06:12.155 sys 0m5.206s
00:06:12.155 17:36:01 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:12.155 ************************************
00:06:12.155 END TEST blockdev_nvme
00:06:12.155 ************************************
00:06:12.155 17:36:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:12.155 17:36:01 -- spdk/autotest.sh@209 -- # uname -s
00:06:12.155 17:36:01 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:12.155 17:36:01 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:12.155 17:36:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:12.155 17:36:01 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:12.155 17:36:01 -- common/autotest_common.sh@10 -- # set +x
00:06:12.155 ************************************
00:06:12.155 START TEST blockdev_nvme_gpt
00:06:12.155 ************************************
00:06:12.155 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:12.155 * Looking for test storage...
00:06:12.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:12.155 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.155 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.155 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.414 17:36:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.414 --rc genhtml_branch_coverage=1 00:06:12.414 --rc genhtml_function_coverage=1 00:06:12.414 --rc genhtml_legend=1 00:06:12.414 --rc geninfo_all_blocks=1 00:06:12.414 --rc geninfo_unexecuted_blocks=1 00:06:12.414 00:06:12.414 ' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.414 --rc 
genhtml_branch_coverage=1 00:06:12.414 --rc genhtml_function_coverage=1 00:06:12.414 --rc genhtml_legend=1 00:06:12.414 --rc geninfo_all_blocks=1 00:06:12.414 --rc geninfo_unexecuted_blocks=1 00:06:12.414 00:06:12.414 ' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.414 --rc genhtml_branch_coverage=1 00:06:12.414 --rc genhtml_function_coverage=1 00:06:12.414 --rc genhtml_legend=1 00:06:12.414 --rc geninfo_all_blocks=1 00:06:12.414 --rc geninfo_unexecuted_blocks=1 00:06:12.414 00:06:12.414 ' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.414 --rc genhtml_branch_coverage=1 00:06:12.414 --rc genhtml_function_coverage=1 00:06:12.414 --rc genhtml_legend=1 00:06:12.414 --rc geninfo_all_blocks=1 00:06:12.414 --rc geninfo_unexecuted_blocks=1 00:06:12.414 00:06:12.414 ' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:12.414 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61119 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61119 
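With spdk_tgt coming up, the suite's setup_gpt_conf does the disk preparation, and the trace below is dense, so a condensed recap of what it runs: scan /sys/block/nvme*/queue/zoned to rule out zoned namespaces, probe each disk with parted until one reports a blank label, write a GPT with two half-disk partitions, then retype them with the SPDK partition-type GUIDs parsed out of module/bdev/gpt/gpt.h so the gpt bdev module will claim them. The commands, as they appear further down:

  # a blank disk makes parted print "unrecognised disk label"
  parted /dev/nvme0n1 -ms print
  # lay down a GPT with two partitions covering the whole disk
  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # retype with SPDK's GPT type GUIDs (fixed unique GUIDs for reproducibility)
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1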
00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61119 ']' 00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.415 17:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:12.415 17:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:12.415 [2024-10-13 17:36:02.068994] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:12.415 [2024-10-13 17:36:02.069120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61119 ] 00:06:12.415 [2024-10-13 17:36:02.213061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.673 [2024-10-13 17:36:02.309505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.239 17:36:02 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.239 17:36:02 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:13.239 17:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:13.239 17:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:13.239 17:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.758 Waiting for block devices as requested 00:06:13.758 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:13.758 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:13.758 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:14.059 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:19.324 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:19.324 17:36:08 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:19.324 BYT; 00:06:19.324 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:19.324 BYT; 00:06:19.324 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:19.324 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:19.324 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:19.325 17:36:08 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:19.325 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:19.325 17:36:08 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:20.259 The operation has completed successfully. 00:06:20.259 17:36:09 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:21.194 The operation has completed successfully. 00:06:21.194 17:36:10 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:21.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.018 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.018 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.018 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.018 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:22.018 17:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.018 17:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.018 [] 00:06:22.018 17:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:22.018 17:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:22.018 17:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.018 17:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:22.585 17:36:12 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:22.585 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:22.585 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:22.586 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "82407e0b-2384-4383-a4b2-be75f6cb8513"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "82407e0b-2384-4383-a4b2-be75f6cb8513",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ce70ca23-81a4-4f51-8c0e-76b13ce56422"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce70ca23-81a4-4f51-8c0e-76b13ce56422",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "52a52610-7d46-4766-8058-9f2e00ade8c3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52a52610-7d46-4766-8058-9f2e00ade8c3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a58dac76-51bf-4a63-965c-59d2c54e07b5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a58dac76-51bf-4a63-965c-59d2c54e07b5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "90a859ee-0dac-4d95-98ad-25b42b6a9934"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "90a859ee-0dac-4d95-98ad-25b42b6a9934",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:22.586 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:22.586 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:22.586 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:22.586 17:36:12 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61119 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61119 ']' 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61119 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61119 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.586 killing process with pid 61119 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61119' 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61119 00:06:22.586 17:36:12 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61119 00:06:23.963 17:36:13 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.963 17:36:13 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.963 17:36:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:23.963 17:36:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.963 17:36:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:23.963 ************************************ 00:06:23.963 START TEST bdev_hello_world 00:06:23.963 ************************************ 00:06:23.963 17:36:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.963 
[2024-10-13 17:36:13.509175] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:23.963 [2024-10-13 17:36:13.509315] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61737 ] 00:06:23.963 [2024-10-13 17:36:13.647956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.963 [2024-10-13 17:36:13.730335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.529 [2024-10-13 17:36:14.218496] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:24.529 [2024-10-13 17:36:14.218537] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:24.529 [2024-10-13 17:36:14.218554] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:24.529 [2024-10-13 17:36:14.220510] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:24.529 [2024-10-13 17:36:14.220995] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:24.529 [2024-10-13 17:36:14.221021] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:24.529 [2024-10-13 17:36:14.221197] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:24.529 00:06:24.529 [2024-10-13 17:36:14.221216] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:25.095 00:06:25.095 real 0m1.327s 00:06:25.095 user 0m1.065s 00:06:25.095 sys 0m0.158s 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:25.095 ************************************ 00:06:25.095 END TEST bdev_hello_world 00:06:25.095 ************************************ 00:06:25.095 17:36:14 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:25.095 17:36:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.095 17:36:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.095 17:36:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:25.095 ************************************ 00:06:25.095 START TEST bdev_bounds 00:06:25.095 ************************************ 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61768 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.095 Process bdevio pid: 61768 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61768' 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61768 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61768 ']' 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
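The bdev_hello_world pass above is the smallest possible bdev consumer: open the named bdev, write "Hello World!" through an I/O channel, read it back, then stop the app, exactly as the NOTICE lines from hello_bdev.c record. Reduced to a standalone command with the paths used here:

  # open Nvme0n1 from bdev.json, write a string, read it back, exit 0
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1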
00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:25.095 17:36:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:25.353 [2024-10-13 17:36:14.884219] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:06:25.353 [2024-10-13 17:36:14.884354] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61768 ]
00:06:25.353 [2024-10-13 17:36:15.034458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:25.353 [2024-10-13 17:36:15.134476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.353 [2024-10-13 17:36:15.134668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:25.353 [2024-10-13 17:36:15.134801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.921 17:36:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:25.921 17:36:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:06:25.921 17:36:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:26.179 I/O targets:
00:06:26.179 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:26.179 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:26.179 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:26.179 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:26.179 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:26.179 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:26.179 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:26.179
00:06:26.179
00:06:26.179 CUnit - A unit testing framework for C - Version 2.1-3
00:06:26.179 http://cunit.sourceforge.net/
00:06:26.179
00:06:26.179
00:06:26.179 Suite: bdevio tests on: Nvme3n1
00:06:26.179 Test: blockdev write read block ...passed
00:06:26.179 Test: blockdev write zeroes read block ...passed
00:06:26.179 Test: blockdev write zeroes read no split ...passed
00:06:26.179 Test: blockdev write zeroes read split ...passed
00:06:26.179 Test: blockdev write zeroes read split partial ...passed
00:06:26.179 Test: blockdev reset ...[2024-10-13 17:36:15.854438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller
00:06:26.179 [2024-10-13 17:36:15.857271] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:26.179 passed
00:06:26.179 Test: blockdev write read 8 blocks ...passed
00:06:26.179 Test: blockdev write read size > 128k ...passed
00:06:26.179 Test: blockdev write read invalid size ...passed
00:06:26.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:26.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:26.179 Test: blockdev write read max offset ...passed
00:06:26.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:26.179 Test: blockdev writev readv 8 blocks ...passed
00:06:26.179 Test: blockdev writev readv 30 x 1block ...passed
00:06:26.179 Test: blockdev writev readv block ...passed
00:06:26.179 Test: blockdev writev readv size > 128k ...passed
00:06:26.179 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:26.179 Test: blockdev comparev and writev ...[2024-10-13 17:36:15.864969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa004000 len:0x1000
00:06:26.179 [2024-10-13 17:36:15.865089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:26.179 passed
00:06:26.179 Test: blockdev nvme passthru rw ...passed
00:06:26.179 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:36:15.866424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:26.179 [2024-10-13 17:36:15.866517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:26.179 passed
00:06:26.179 Test: blockdev nvme admin passthru ...passed
00:06:26.179 Test: blockdev copy ...passed
00:06:26.179 Suite: bdevio tests on: Nvme2n3
00:06:26.179 Test: blockdev write read block ...passed
00:06:26.179 Test: blockdev write zeroes read block ...passed
00:06:26.179 Test: blockdev write zeroes read no split ...passed
00:06:26.179 Test: blockdev write zeroes read split ...passed
00:06:26.179 Test: blockdev write zeroes read split partial ...passed
00:06:26.179 Test: blockdev reset ...[2024-10-13 17:36:15.925846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:06:26.179 [2024-10-13 17:36:15.929188] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:26.179 passed
00:06:26.179 Test: blockdev write read 8 blocks ...passed
00:06:26.179 Test: blockdev write read size > 128k ...passed
00:06:26.179 Test: blockdev write read invalid size ...passed
00:06:26.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:26.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:26.179 Test: blockdev write read max offset ...passed
00:06:26.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:26.179 Test: blockdev writev readv 8 blocks ...passed
00:06:26.179 Test: blockdev writev readv 30 x 1block ...passed
00:06:26.179 Test: blockdev writev readv block ...passed
00:06:26.179 Test: blockdev writev readv size > 128k ...passed
00:06:26.179 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:26.179 Test: blockdev comparev and writev ...[2024-10-13 17:36:15.939941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa002000 len:0x1000
00:06:26.179 [2024-10-13 17:36:15.940047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:26.179 passed
00:06:26.179 Test: blockdev nvme passthru rw ...passed
00:06:26.179 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:36:15.942106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:26.179 passed
00:06:26.179 Test: blockdev nvme admin passthru ...[2024-10-13 17:36:15.942193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:26.179 passed
00:06:26.179 Test: blockdev copy ...passed
00:06:26.179 Suite: bdevio tests on: Nvme2n2
00:06:26.179 Test: blockdev write read block ...passed
00:06:26.179 Test: blockdev write zeroes read block ...passed
00:06:26.179 Test: blockdev write zeroes read no split ...passed
00:06:26.179 Test: blockdev write zeroes read split ...passed
00:06:26.438 Test: blockdev write zeroes read split partial ...passed
00:06:26.438 Test: blockdev reset ...[2024-10-13 17:36:16.000362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:06:26.438 [2024-10-13 17:36:16.003850] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:26.438 passed
00:06:26.438 Test: blockdev write read 8 blocks ...passed
00:06:26.438 Test: blockdev write read size > 128k ...passed
00:06:26.438 Test: blockdev write read invalid size ...passed
00:06:26.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:26.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:26.438 Test: blockdev write read max offset ...passed
00:06:26.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:26.438 Test: blockdev writev readv 8 blocks ...passed
00:06:26.438 Test: blockdev writev readv 30 x 1block ...passed
00:06:26.438 Test: blockdev writev readv block ...passed
00:06:26.438 Test: blockdev writev readv size > 128k ...passed
00:06:26.438 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:26.438 Test: blockdev comparev and writev ...[2024-10-13 17:36:16.019984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e38000 len:0x1000
00:06:26.438 [2024-10-13 17:36:16.020083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:26.438 passed
00:06:26.438 Test: blockdev nvme passthru rw ...passed
00:06:26.438 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:36:16.021893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:26.438 [2024-10-13 17:36:16.021968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:26.438 passed
00:06:26.438 Test: blockdev nvme admin passthru ...passed
00:06:26.438 Test: blockdev copy ...passed
00:06:26.438 Suite: bdevio tests on: Nvme2n1
00:06:26.438 Test: blockdev write read block ...passed
00:06:26.438 Test: blockdev write zeroes read block ...passed
00:06:26.438 Test: blockdev write zeroes read no split ...passed
00:06:26.438 Test: blockdev write zeroes read split ...passed
00:06:26.438 Test: blockdev write zeroes read split partial ...passed
00:06:26.438 Test: blockdev reset ...[2024-10-13 17:36:16.084683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:06:26.438 [2024-10-13 17:36:16.088538] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:26.438 passed
00:06:26.438 Test: blockdev write read 8 blocks ...passed
00:06:26.438 Test: blockdev write read size > 128k ...passed
00:06:26.438 Test: blockdev write read invalid size ...passed
00:06:26.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:26.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:26.438 Test: blockdev write read max offset ...passed
00:06:26.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:26.438 Test: blockdev writev readv 8 blocks ...passed
00:06:26.438 Test: blockdev writev readv 30 x 1block ...passed
00:06:26.438 Test: blockdev writev readv block ...passed
00:06:26.438 Test: blockdev writev readv size > 128k ...passed
00:06:26.438 Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:26.438 Test: blockdev comparev and writev ...[2024-10-13 17:36:16.104662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e34000 len:0x1000
00:06:26.438 [2024-10-13 17:36:16.104780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:26.438 passed
00:06:26.438 Test: blockdev nvme passthru rw ...passed
00:06:26.438 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:36:16.107194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:26.438 [2024-10-13 17:36:16.107267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:26.438 passed
00:06:26.438 Test: blockdev nvme admin passthru ...passed
00:06:26.438 Test: blockdev copy ...passed
00:06:26.438 Suite: bdevio tests on: Nvme1n1p2
00:06:26.438 Test: blockdev write read block ...passed
00:06:26.438 Test: blockdev write zeroes read block ...passed
00:06:26.438 Test: blockdev write zeroes read no split ...passed
00:06:26.438 Test: blockdev write zeroes read split ...passed
00:06:26.438 Test: blockdev write zeroes read split partial ...passed
00:06:26.438 Test: blockdev reset ...[2024-10-13 17:36:16.166310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:06:26.438 [2024-10-13 17:36:16.168713] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:26.438 passed 00:06:26.438 Test: blockdev write read 8 blocks ...passed 00:06:26.438 Test: blockdev write read size > 128k ...passed 00:06:26.438 Test: blockdev write read invalid size ...passed 00:06:26.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.438 Test: blockdev write read max offset ...passed 00:06:26.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.438 Test: blockdev writev readv 8 blocks ...passed 00:06:26.438 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.438 Test: blockdev writev readv block ...passed 00:06:26.438 Test: blockdev writev readv size > 128k ...passed 00:06:26.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.438 Test: blockdev comparev and writev ...[2024-10-13 17:36:16.177355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c9e30000 len:0x1000 00:06:26.438 [2024-10-13 17:36:16.177438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.438 passed 00:06:26.438 Test: blockdev nvme passthru rw ...passed 00:06:26.438 Test: blockdev nvme passthru vendor specific ...passed 00:06:26.438 Test: blockdev nvme admin passthru ...passed 00:06:26.438 Test: blockdev copy ...passed 00:06:26.438 Suite: bdevio tests on: Nvme1n1p1 00:06:26.438 Test: blockdev write read block ...passed 00:06:26.438 Test: blockdev write zeroes read block ...passed 00:06:26.438 Test: blockdev write zeroes read no split ...passed 00:06:26.438 Test: blockdev write zeroes read split ...passed 00:06:26.438 Test: blockdev write zeroes read split partial ...passed 00:06:26.438 Test: blockdev reset ...[2024-10-13 17:36:16.228068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:26.438 [2024-10-13 17:36:16.231196] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
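[annotation] Each suite's "blockdev reset" drives a full controller disconnect/reconnect, which is why every suite logs nvme_ctrlr_disconnect followed by "Resetting controller successful". Outside bdevio the same path can be exercised over RPC; a sketch (the default /var/tmp/spdk.sock socket and the controller name Nvme1 are assumptions, since this step does not print them):

    # ask the bdev_nvme module to reset one attached controller; on success it
    # emits the same "Resetting controller successful." notice seen above
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme1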
00:06:26.438 passed 00:06:26.438 Test: blockdev write read 8 blocks ...passed 00:06:26.438 Test: blockdev write read size > 128k ...passed 00:06:26.438 Test: blockdev write read invalid size ...passed 00:06:26.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.438 Test: blockdev write read max offset ...passed 00:06:26.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.438 Test: blockdev writev readv 8 blocks ...passed 00:06:26.438 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.438 Test: blockdev writev readv block ...passed 00:06:26.438 Test: blockdev writev readv size > 128k ...passed 00:06:26.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.438 Test: blockdev comparev and writev ...[2024-10-13 17:36:16.247362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2aaa0e000 len:0x1000 00:06:26.438 [2024-10-13 17:36:16.247451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.438 passed 00:06:26.438 Test: blockdev nvme passthru rw ...passed 00:06:26.438 Test: blockdev nvme passthru vendor specific ...passed 00:06:26.438 Test: blockdev nvme admin passthru ...passed 00:06:26.438 Test: blockdev copy ...passed 00:06:26.438 Suite: bdevio tests on: Nvme0n1 00:06:26.438 Test: blockdev write read block ...passed 00:06:26.698 Test: blockdev write zeroes read block ...passed 00:06:26.698 Test: blockdev write zeroes read no split ...passed 00:06:26.698 Test: blockdev write zeroes read split ...passed 00:06:26.698 Test: blockdev write zeroes read split partial ...passed 00:06:26.698 Test: blockdev reset ...[2024-10-13 17:36:16.301145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:26.698 [2024-10-13 17:36:16.305050] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:26.698 passed 00:06:26.698 Test: blockdev write read 8 blocks ...passed 00:06:26.698 Test: blockdev write read size > 128k ...passed 00:06:26.698 Test: blockdev write read invalid size ...passed 00:06:26.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.698 Test: blockdev write read max offset ...passed 00:06:26.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.698 Test: blockdev writev readv 8 blocks ...passed 00:06:26.698 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.698 Test: blockdev writev readv block ...passed 00:06:26.698 Test: blockdev writev readv size > 128k ...passed 00:06:26.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.698 Test: blockdev comparev and writev ...passed 00:06:26.698 Test: blockdev nvme passthru rw ...[2024-10-13 17:36:16.318765] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:26.698 separate metadata which is not supported yet. 
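[annotation] Two details in this stretch are worth decoding. The Nvme1n1p1 compare lands at lba:256 while Nvme1n1p2's landed at lba:655360 because GPT partition bdevs shift every I/O by the partition's starting LBA before it reaches the namespace. And comparev_and_writev is skipped on Nvme0n1 because that namespace is formatted with separate metadata, which bdevio's compare path does not support yet. Both facts can be read back from the bdev layer (sketch; the socket path and the exact jq field names are assumptions):

    # partition offset and metadata size as reported by bdev_get_bdevs
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p1 \
        | jq '.[0].driver_specific.gpt.offset_blocks'
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0].md_size'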
00:06:26.698 passed 00:06:26.698 Test: blockdev nvme passthru vendor specific ...[2024-10-13 17:36:16.320111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:26.698 [2024-10-13 17:36:16.320360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:26.698 passed 00:06:26.698 Test: blockdev nvme admin passthru ...passed 00:06:26.698 Test: blockdev copy ...passed 00:06:26.698 00:06:26.698 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.698 suites 7 7 n/a 0 0 00:06:26.698 tests 161 161 161 0 0 00:06:26.698 asserts 1025 1025 1025 0 n/a 00:06:26.698 00:06:26.698 Elapsed time = 1.323 seconds 00:06:26.698 0 00:06:26.698 17:36:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61768 00:06:26.698 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61768 ']' 00:06:26.698 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61768 00:06:26.698 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61768 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.699 killing process with pid 61768 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61768' 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61768 00:06:26.699 17:36:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61768 00:06:27.267 17:36:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:27.267 00:06:27.267 real 0m2.233s 00:06:27.267 user 0m5.653s 00:06:27.267 sys 0m0.297s 00:06:27.267 17:36:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.267 17:36:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:27.267 ************************************ 00:06:27.267 END TEST bdev_bounds 00:06:27.267 ************************************ 00:06:27.527 17:36:17 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:27.527 17:36:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:27.527 17:36:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.527 17:36:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.527 ************************************ 00:06:27.527 START TEST bdev_nbd 00:06:27.527 ************************************ 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61828 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61828 /var/tmp/spdk-nbd.sock 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61828 ']' 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:27.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.527 17:36:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:27.527 [2024-10-13 17:36:17.186180] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
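[annotation] The bdev_nbd leg that starts here boots a minimal bdev_svc app on its own RPC socket, checks that the kernel nbd driver is present ([[ -e /sys/module/nbd ]]), and then exports each bdev as a kernel /dev/nbdX node. Condensed into its three essential commands, with paths as in this run (the explicit modprobe is an assumption; the trace only shows the existence check):

    modprobe nbd    # provides /sys/module/nbd and the /dev/nbd* nodes
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # export one bdev per nbd node; the loop below repeats this for all seven
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1 /dev/nbd0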
00:06:27.527 [2024-10-13 17:36:17.186307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.527 [2024-10-13 17:36:17.334447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.786 [2024-10-13 17:36:17.432018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:28.356 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.616 1+0 records in 00:06:28.616 1+0 records out 00:06:28.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087301 s, 4.7 MB/s 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:28.616 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.875 1+0 records in 00:06:28.875 1+0 records out 00:06:28.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418575 s, 9.8 MB/s 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:28.875 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.133 1+0 records in 00:06:29.133 1+0 records out 00:06:29.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038784 s, 10.6 MB/s 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:29.133 17:36:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.391 1+0 records in 00:06:29.391 1+0 records out 00:06:29.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642742 s, 6.4 MB/s 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:29.391 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.679 1+0 records in 00:06:29.679 1+0 records out 00:06:29.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632347 s, 6.5 MB/s 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:29.679 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.937 1+0 records in 00:06:29.937 1+0 records out 00:06:29.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532405 s, 7.7 MB/s 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.937 1+0 records in 00:06:29.937 1+0 records out 00:06:29.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584816 s, 7.0 MB/s 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:29.937 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd0", 00:06:30.195 "bdev_name": "Nvme0n1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd1", 00:06:30.195 "bdev_name": "Nvme1n1p1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd2", 00:06:30.195 "bdev_name": "Nvme1n1p2" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd3", 00:06:30.195 "bdev_name": "Nvme2n1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd4", 00:06:30.195 "bdev_name": "Nvme2n2" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd5", 00:06:30.195 "bdev_name": "Nvme2n3" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd6", 00:06:30.195 "bdev_name": "Nvme3n1" 00:06:30.195 } 00:06:30.195 ]' 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd0", 00:06:30.195 "bdev_name": "Nvme0n1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd1", 00:06:30.195 "bdev_name": "Nvme1n1p1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd2", 00:06:30.195 "bdev_name": "Nvme1n1p2" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd3", 00:06:30.195 "bdev_name": "Nvme2n1" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd4", 00:06:30.195 "bdev_name": "Nvme2n2" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd5", 00:06:30.195 "bdev_name": "Nvme2n3" 00:06:30.195 }, 00:06:30.195 { 00:06:30.195 "nbd_device": "/dev/nbd6", 00:06:30.195 "bdev_name": "Nvme3n1" 00:06:30.195 } 00:06:30.195 ]' 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.195 17:36:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.452 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.710 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.967 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.967 17:36:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.223 17:36:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:31.223 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:31.223 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:31.223 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:31.223 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.224 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.480 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.737 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.995 
17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:31.995 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:32.252 /dev/nbd0 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.252 1+0 records in 00:06:32.252 1+0 records out 00:06:32.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349676 s, 11.7 MB/s 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:32.252 17:36:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:32.510 /dev/nbd1 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.510 17:36:22 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.510 1+0 records in 00:06:32.510 1+0 records out 00:06:32.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514457 s, 8.0 MB/s 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:32.510 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:32.767 /dev/nbd10 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.767 1+0 records in 00:06:32.767 1+0 records out 00:06:32.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546168 s, 7.5 MB/s 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:32.767 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:33.025 /dev/nbd11 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.025 1+0 records in 00:06:33.025 1+0 records out 00:06:33.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498316 s, 8.2 MB/s 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.025 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:33.026 /dev/nbd12 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
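[annotation] Every exported node goes through the same readiness probe before it counts: poll /proc/partitions until the nbd name appears, then read a single 4 KiB block with O_DIRECT and verify the copy is non-empty. Reconstructed (and slightly simplified) from the waitfornbd xtrace above; the retry delay is an assumption, since the trace only shows the 1..20 loop bounds:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed pacing; not visible in the trace
        done
        # one 4 KiB direct-I/O read, then confirm the copy is non-empty
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }
    waitfornbd nbd12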
00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.026 1+0 records in 00:06:33.026 1+0 records out 00:06:33.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005395 s, 7.6 MB/s 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:33.026 17:36:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:33.283 /dev/nbd13 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.283 1+0 records in 00:06:33.283 1+0 records out 00:06:33.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874404 s, 4.7 MB/s 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:33.283 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:33.540 /dev/nbd14 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.540 1+0 records in 00:06:33.540 1+0 records out 00:06:33.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600525 s, 6.8 MB/s 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.540 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.541 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:33.541 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.541 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.541 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd0", 00:06:33.798 "bdev_name": "Nvme0n1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd1", 00:06:33.798 "bdev_name": "Nvme1n1p1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd10", 00:06:33.798 "bdev_name": "Nvme1n1p2" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd11", 00:06:33.798 "bdev_name": "Nvme2n1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd12", 00:06:33.798 "bdev_name": "Nvme2n2" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd13", 00:06:33.798 "bdev_name": "Nvme2n3" 
00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd14", 00:06:33.798 "bdev_name": "Nvme3n1" 00:06:33.798 } 00:06:33.798 ]' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd0", 00:06:33.798 "bdev_name": "Nvme0n1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd1", 00:06:33.798 "bdev_name": "Nvme1n1p1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd10", 00:06:33.798 "bdev_name": "Nvme1n1p2" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd11", 00:06:33.798 "bdev_name": "Nvme2n1" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd12", 00:06:33.798 "bdev_name": "Nvme2n2" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd13", 00:06:33.798 "bdev_name": "Nvme2n3" 00:06:33.798 }, 00:06:33.798 { 00:06:33.798 "nbd_device": "/dev/nbd14", 00:06:33.798 "bdev_name": "Nvme3n1" 00:06:33.798 } 00:06:33.798 ]' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.798 /dev/nbd1 00:06:33.798 /dev/nbd10 00:06:33.798 /dev/nbd11 00:06:33.798 /dev/nbd12 00:06:33.798 /dev/nbd13 00:06:33.798 /dev/nbd14' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.798 /dev/nbd1 00:06:33.798 /dev/nbd10 00:06:33.798 /dev/nbd11 00:06:33.798 /dev/nbd12 00:06:33.798 /dev/nbd13 00:06:33.798 /dev/nbd14' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:33.798 256+0 records in 00:06:33.798 256+0 records out 00:06:33.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415629 s, 252 MB/s 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.798 256+0 records in 00:06:33.798 256+0 records out 00:06:33.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0929086 s, 11.3 MB/s 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.798 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.056 256+0 records in 00:06:34.056 256+0 records out 00:06:34.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0851144 s, 12.3 MB/s 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:34.056 256+0 records in 00:06:34.056 256+0 records out 00:06:34.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0827435 s, 12.7 MB/s 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:34.056 256+0 records in 00:06:34.056 256+0 records out 00:06:34.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0828909 s, 12.7 MB/s 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.056 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:34.313 256+0 records in 00:06:34.313 256+0 records out 00:06:34.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0831806 s, 12.6 MB/s 00:06:34.313 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.313 17:36:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:34.313 256+0 records in 00:06:34.313 256+0 records out 00:06:34.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0867565 s, 12.1 MB/s 00:06:34.313 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.313 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:34.571 256+0 records in 00:06:34.571 256+0 records out 00:06:34.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105001 s, 10.0 MB/s 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.571 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.829 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:35.086 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:35.086 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.087 17:36:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.398 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.399 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.399 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:35.661 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.918 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:36.176 17:36:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:36.433 malloc_lvol_verify 00:06:36.433 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:36.690 86c7c19f-8eaf-4236-94b7-00b7c7e3786a 00:06:36.690 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:36.947 51a4e705-b40b-4d5c-8ba8-aca989d9d585 00:06:36.947 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:37.205 /dev/nbd0 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:37.205 mke2fs 1.47.0 (5-Feb-2023) 00:06:37.205 Discarding device blocks: 0/4096 done 00:06:37.205 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:37.205 00:06:37.205 Allocating group tables: 0/1 done 00:06:37.205 Writing inode tables: 0/1 done 00:06:37.205 Creating journal (1024 blocks): done 00:06:37.205 Writing superblocks and filesystem accounting information: 0/1 done 00:06:37.205 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:37.205 17:36:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61828 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61828 ']' 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61828 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61828 00:06:37.464 killing process with pid 61828 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61828' 00:06:37.464 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61828 00:06:37.465 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61828 00:06:38.029 17:36:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:38.029 00:06:38.029 real 0m10.590s 00:06:38.029 user 0m15.115s 00:06:38.029 sys 0m3.505s 00:06:38.029 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.029 17:36:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:38.029 ************************************ 00:06:38.029 END TEST bdev_nbd 00:06:38.029 ************************************ 00:06:38.029 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:38.029 skipping fio tests on NVMe due to multi-ns failures. 00:06:38.029 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:38.029 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:38.029 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:38.029 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:38.030 17:36:27 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:38.030 17:36:27 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:38.030 17:36:27 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.030 17:36:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.030 ************************************ 00:06:38.030 START TEST bdev_verify 00:06:38.030 ************************************ 00:06:38.030 17:36:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:38.030 [2024-10-13 17:36:27.819370] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:38.030 [2024-10-13 17:36:27.819497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62236 ] 00:06:38.287 [2024-10-13 17:36:27.968072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.287 [2024-10-13 17:36:28.078067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.287 [2024-10-13 17:36:28.078147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.219 Running I/O for 5 seconds... 
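The teardown traced above reaps the NBD app (pid 61828) with the killprocess helper: verify the pid is set and alive, check via ps that the target is an SPDK reactor rather than a sudo wrapper, then signal and wait on it. Sketched from the trace (the sudo branch is never exercised here, so its handling below is an assumption):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # no pid was recorded
        kill -0 "$pid" || return 0       # process already exited
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                     # assumed: the real helper special-cases sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

The verify stage that has just started reduces to one bdevperf invocation, copied from the run_test trace above. With -C, every core in the mask drives every bdev, which is why the table below lists each of the seven bdevs twice, once per core of mask 0x3:

    # -q 128: queue depth per job      -o 4096: I/O size in bytes
    # -w verify: write, read back, and compare each block
    # -t 5: run for five seconds       -m 0x3: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3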
00:06:41.080 22016.00 IOPS, 86.00 MiB/s [2024-10-13T17:36:32.283Z] 24064.00 IOPS, 94.00 MiB/s [2024-10-13T17:36:33.215Z] 23530.67 IOPS, 91.92 MiB/s [2024-10-13T17:36:34.149Z] 23264.00 IOPS, 90.88 MiB/s [2024-10-13T17:36:34.149Z] 22899.20 IOPS, 89.45 MiB/s 00:06:44.335 Latency(us) 00:06:44.335 [2024-10-13T17:36:34.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0xbd0bd 00:06:44.335 Nvme0n1 : 5.07 1603.50 6.26 0.00 0.00 79390.51 13308.85 85902.57 00:06:44.335 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:44.335 Nvme0n1 : 5.08 1636.32 6.39 0.00 0.00 78036.09 14619.57 100018.02 00:06:44.335 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x4ff80 00:06:44.335 Nvme1n1p1 : 5.09 1610.06 6.29 0.00 0.00 79199.70 15627.82 81466.29 00:06:44.335 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:44.335 Nvme1n1p1 : 5.09 1635.72 6.39 0.00 0.00 77846.50 16736.89 86709.17 00:06:44.335 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x4ff7f 00:06:44.335 Nvme1n1p2 : 5.09 1608.75 6.28 0.00 0.00 79061.85 16535.24 72190.42 00:06:44.335 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:44.335 Nvme1n1p2 : 5.09 1635.19 6.39 0.00 0.00 77732.60 16837.71 84289.38 00:06:44.335 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x80000 00:06:44.335 Nvme2n1 : 5.09 1608.29 6.28 0.00 0.00 78952.48 16131.94 70577.23 00:06:44.335 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x80000 length 0x80000 00:06:44.335 Nvme2n1 : 5.09 1633.38 6.38 0.00 0.00 77632.32 19156.68 76626.71 00:06:44.335 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x80000 00:06:44.335 Nvme2n2 : 5.10 1606.45 6.28 0.00 0.00 78836.36 19761.62 75013.51 00:06:44.335 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x80000 length 0x80000 00:06:44.335 Nvme2n2 : 5.10 1632.09 6.38 0.00 0.00 77500.75 18148.43 70173.93 00:06:44.335 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x80000 00:06:44.335 Nvme2n3 : 5.10 1605.39 6.27 0.00 0.00 78702.80 18652.55 79449.80 00:06:44.335 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x80000 length 0x80000 00:06:44.335 Nvme2n3 : 5.10 1631.63 6.37 0.00 0.00 77351.55 16535.24 75013.51 00:06:44.335 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x0 length 0x20000 00:06:44.335 Nvme3n1 : 5.11 1604.41 6.27 0.00 0.00 78578.97 13611.32 83482.78 00:06:44.335 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.335 Verification LBA range: start 0x20000 length 0x20000 00:06:44.335 
Nvme3n1 : 5.10 1630.58 6.37 0.00 0.00 77241.23 14720.39 77030.01 00:06:44.335 [2024-10-13T17:36:34.149Z] =================================================================================================================== 00:06:44.335 [2024-10-13T17:36:34.149Z] Total : 22681.76 88.60 0.00 0.00 78284.46 13308.85 100018.02 00:06:45.709 00:06:45.709 real 0m7.353s 00:06:45.709 user 0m13.769s 00:06:45.709 sys 0m0.230s 00:06:45.709 ************************************ 00:06:45.709 END TEST bdev_verify 00:06:45.709 ************************************ 00:06:45.709 17:36:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.709 17:36:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.709 17:36:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:45.709 17:36:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:45.709 17:36:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.709 17:36:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.709 ************************************ 00:06:45.709 START TEST bdev_verify_big_io 00:06:45.709 ************************************ 00:06:45.709 17:36:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:45.709 [2024-10-13 17:36:35.238571] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:45.709 [2024-10-13 17:36:35.238695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62330 ] 00:06:45.709 [2024-10-13 17:36:35.390264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.709 [2024-10-13 17:36:35.492542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.709 [2024-10-13 17:36:35.492588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.643 Running I/O for 5 seconds... 
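Against the 4 KiB verify numbers above (about 22-24K IOPS aggregate), the big-I/O pass that has just started keeps the same verify workload but raises -o to 65536, so each operation moves 64 KiB: expect per-job IOPS roughly 16x lower in the table that follows while MiB/s stays comparable. To pull per-job rows out of a saved copy of such a log, a small awk sketch (it assumes the table layout printed here, where ':' is the second field of each job row; the Total row has one column fewer and is skipped):

    # Strip the Jenkins elapsed-time prefix if present, then print
    # "<job> <runtime> <IOPS> <MiB/s>" for every per-job result row.
    awk '{ sub(/^[0-9:.]+ +/, "") }
         $2 == ":" && $1 != "Total" { print $1, $3, $4, $5 }' bdevperf.log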
00:06:52.763 2236.00 IOPS, 139.75 MiB/s [2024-10-13T17:36:42.836Z] 3238.50 IOPS, 202.41 MiB/s 00:06:53.022 Latency(us) 00:06:53.022 [2024-10-13T17:36:42.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.022 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0xbd0b 00:06:53.022 Nvme0n1 : 5.89 105.02 6.56 0.00 0.00 1136001.16 12149.37 1677721.60 00:06:53.022 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:53.022 Nvme0n1 : 6.10 83.94 5.25 0.00 0.00 1435418.29 19156.68 1548666.09 00:06:53.022 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x4ff8 00:06:53.022 Nvme1n1p1 : 6.02 104.58 6.54 0.00 0.00 1073953.08 98404.82 1335724.50 00:06:53.022 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:53.022 Nvme1n1p1 : 6.32 58.25 3.64 0.00 0.00 1992406.18 137121.48 2555299.05 00:06:53.022 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x4ff7 00:06:53.022 Nvme1n1p2 : 6.11 106.91 6.68 0.00 0.00 1016572.14 81062.99 1884210.41 00:06:53.022 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:53.022 Nvme1n1p2 : 6.14 78.01 4.88 0.00 0.00 1465291.08 167772.16 2335904.69 00:06:53.022 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x8000 00:06:53.022 Nvme2n1 : 6.30 113.25 7.08 0.00 0.00 916028.06 55655.19 1910021.51 00:06:53.022 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x8000 length 0x8000 00:06:53.022 Nvme2n1 : 6.14 92.80 5.80 0.00 0.00 1218819.78 37708.41 1606741.07 00:06:53.022 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x8000 00:06:53.022 Nvme2n2 : 6.40 128.74 8.05 0.00 0.00 790375.36 19257.50 1922927.06 00:06:53.022 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x8000 length 0x8000 00:06:53.022 Nvme2n2 : 6.25 98.37 6.15 0.00 0.00 1107130.02 29844.09 1361535.61 00:06:53.022 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x8000 00:06:53.022 Nvme2n3 : 6.45 141.71 8.86 0.00 0.00 694215.42 8670.92 1948738.17 00:06:53.022 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x8000 length 0x8000 00:06:53.022 Nvme2n3 : 6.25 98.51 6.16 0.00 0.00 1062688.50 29037.49 1380893.93 00:06:53.022 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x0 length 0x2000 00:06:53.022 Nvme3n1 : 6.56 198.58 12.41 0.00 0.00 487614.57 850.71 1961643.72 00:06:53.022 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.022 Verification LBA range: start 0x2000 length 0x2000 00:06:53.022 Nvme3n1 : 6.33 118.21 7.39 0.00 0.00 847187.34 4032.98 1413157.81 00:06:53.022 [2024-10-13T17:36:42.836Z] 
=================================================================================================================== 00:06:53.022 [2024-10-13T17:36:42.836Z] Total : 1526.86 95.43 0.00 0.00 988261.72 850.71 2555299.05 00:06:54.923 00:06:54.923 real 0m9.199s 00:06:54.923 user 0m17.460s 00:06:54.923 sys 0m0.227s 00:06:54.923 17:36:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.923 17:36:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:54.923 ************************************ 00:06:54.923 END TEST bdev_verify_big_io 00:06:54.923 ************************************ 00:06:54.923 17:36:44 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.923 17:36:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:54.923 17:36:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.923 17:36:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.923 ************************************ 00:06:54.923 START TEST bdev_write_zeroes 00:06:54.923 ************************************ 00:06:54.923 17:36:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.923 [2024-10-13 17:36:44.483793] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:54.923 [2024-10-13 17:36:44.483917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62450 ] 00:06:54.923 [2024-10-13 17:36:44.636017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.182 [2024-10-13 17:36:44.743320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.746 Running I/O for 1 seconds... 
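The pass starting here swaps the workload for -w write_zeroes with a one-second runtime on a single core, exercising each bdev's zero-fill path rather than data verification. Whether a bdev advertises that path can be checked over RPC; a composed example (the bdev name and default RPC socket are illustrative):

    # Query a bdev's capability flags; the GPT partition bdevs dumped
    # later in this log report "write_zeroes": true.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq -r '.[0].supported_io_types.write_zeroes'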
00:06:56.678 64960.00 IOPS, 253.75 MiB/s 00:06:56.678 Latency(us) 00:06:56.678 [2024-10-13T17:36:46.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.678 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme0n1 : 1.03 9224.96 36.04 0.00 0.00 13844.97 11645.24 26214.40 00:06:56.678 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme1n1p1 : 1.03 9213.20 35.99 0.00 0.00 13840.46 11342.77 25508.63 00:06:56.678 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme1n1p2 : 1.03 9200.37 35.94 0.00 0.00 13827.54 11393.18 24702.03 00:06:56.678 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme2n1 : 1.03 9189.19 35.90 0.00 0.00 13817.45 11645.24 23996.26 00:06:56.678 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme2n2 : 1.03 9177.41 35.85 0.00 0.00 13763.30 9628.75 23492.14 00:06:56.678 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme2n3 : 1.03 9166.35 35.81 0.00 0.00 13758.28 8570.09 24399.56 00:06:56.678 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.678 Nvme3n1 : 1.03 9154.52 35.76 0.00 0.00 13731.31 7410.61 26012.75 00:06:56.678 [2024-10-13T17:36:46.492Z] =================================================================================================================== 00:06:56.678 [2024-10-13T17:36:46.492Z] Total : 64326.01 251.27 0.00 0.00 13797.61 7410.61 26214.40 00:06:57.643 00:06:57.643 real 0m2.753s 00:06:57.643 user 0m2.421s 00:06:57.643 sys 0m0.217s 00:06:57.643 17:36:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.643 17:36:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 ************************************ 00:06:57.643 END TEST bdev_write_zeroes 00:06:57.643 ************************************ 00:06:57.643 17:36:47 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.643 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:57.643 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.643 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 ************************************ 00:06:57.643 START TEST bdev_json_nonenclosed 00:06:57.643 ************************************ 00:06:57.643 17:36:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.643 [2024-10-13 17:36:47.283730] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
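Every test in this log is wrapped the same way: run_test prints the START banner seen above, times the wrapped command, and closes with the END banner and the real/user/sys figures. A rough sketch of that wrapper (reconstructed; banner padding and xtrace bookkeeping are simplified):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # the command's own xtrace forms the body
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }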
00:06:57.643 [2024-10-13 17:36:47.283844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:06:57.643 [2024-10-13 17:36:47.433909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.900 [2024-10-13 17:36:47.549259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.900 [2024-10-13 17:36:47.549347] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:57.900 [2024-10-13 17:36:47.549364] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:57.900 [2024-10-13 17:36:47.549374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.158 00:06:58.158 real 0m0.520s 00:06:58.158 user 0m0.318s 00:06:58.158 sys 0m0.098s 00:06:58.158 17:36:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.158 17:36:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:58.158 ************************************ 00:06:58.158 END TEST bdev_json_nonenclosed 00:06:58.158 ************************************ 00:06:58.158 17:36:47 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.158 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:58.158 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.158 17:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.158 ************************************ 00:06:58.158 START TEST bdev_json_nonarray 00:06:58.158 ************************************ 00:06:58.158 17:36:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.158 [2024-10-13 17:36:47.843948] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:58.158 [2024-10-13 17:36:47.844074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62529 ] 00:06:58.417 [2024-10-13 17:36:47.987470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.417 [2024-10-13 17:36:48.084911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.417 [2024-10-13 17:36:48.084995] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
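Both JSON failures here are deliberate: json_config_prepare_ctx must reject a config whose top level is not an object (json_config.c:608) and one whose "subsystems" key is not an array (json_config.c:614), and each test passes when bdevperf exits non-zero. The log never shows the fixtures' contents; minimal stand-ins that would trip the same two checks look like this:

    # A top-level array is valid JSON but not an object, so it trips
    # json_config.c:608 ("not enclosed in {}").
    printf '%s\n' '[]' > nonenclosed.json
    # "subsystems" exists but is an object, so it trips json_config.c:614
    # ("'subsystems' should be an array").
    printf '%s\n' '{ "subsystems": {} }' > nonarray.json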
00:06:58.417 [2024-10-13 17:36:48.085012] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:58.417 [2024-10-13 17:36:48.085021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.677 00:06:58.677 real 0m0.481s 00:06:58.677 user 0m0.282s 00:06:58.677 sys 0m0.096s 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.677 ************************************ 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:58.677 END TEST bdev_json_nonarray 00:06:58.677 ************************************ 00:06:58.677 17:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:58.677 17:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:58.677 17:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:58.677 17:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.677 17:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.677 17:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.677 ************************************ 00:06:58.677 START TEST bdev_gpt_uuid 00:06:58.677 ************************************ 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62554 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62554 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62554 ']' 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:58.677 17:36:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:58.677 [2024-10-13 17:36:48.417513] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
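bdev_gpt_uuid drives a bare spdk_tgt entirely over RPC, and the traces that follow walk through exactly this sequence (commands as traced; jq then asserts that each partition bdev's alias and unique_partition_guid match the expected GPT UUIDs):

    cd /home/vagrant/spdk_repo/spdk
    # Recreate the bdev layout from the generated config, then block
    # until GPT examine has registered Nvme1n1p1/Nvme1n1p2.
    scripts/rpc.py load_config -j test/bdev/bdev.json
    scripts/rpc.py bdev_wait_for_examine
    # Look the first partition up by UUID and read back both fields.
    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'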
00:06:58.677 [2024-10-13 17:36:48.417672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62554 ] 00:06:58.937 [2024-10-13 17:36:48.571102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.937 [2024-10-13 17:36:48.703581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:59.870 Some configs were skipped because the RPC state that can call them passed over. 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.870 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:59.870 { 00:06:59.870 "name": "Nvme1n1p1", 00:06:59.870 "aliases": [ 00:06:59.870 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:59.870 ], 00:06:59.870 "product_name": "GPT Disk", 00:06:59.870 "block_size": 4096, 00:06:59.870 "num_blocks": 655104, 00:06:59.870 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:59.870 "assigned_rate_limits": { 00:06:59.870 "rw_ios_per_sec": 0, 00:06:59.870 "rw_mbytes_per_sec": 0, 00:06:59.870 "r_mbytes_per_sec": 0, 00:06:59.870 "w_mbytes_per_sec": 0 00:06:59.870 }, 00:06:59.870 "claimed": false, 00:06:59.870 "zoned": false, 00:06:59.870 "supported_io_types": { 00:06:59.870 "read": true, 00:06:59.870 "write": true, 00:06:59.870 "unmap": true, 00:06:59.870 "flush": true, 00:06:59.870 "reset": true, 00:06:59.870 "nvme_admin": false, 00:06:59.870 "nvme_io": false, 00:06:59.870 "nvme_io_md": false, 00:06:59.870 "write_zeroes": true, 00:06:59.870 "zcopy": false, 00:06:59.870 "get_zone_info": false, 00:06:59.870 "zone_management": false, 00:06:59.870 "zone_append": false, 00:06:59.870 "compare": true, 00:06:59.870 "compare_and_write": false, 00:06:59.870 "abort": true, 00:06:59.870 "seek_hole": false, 00:06:59.870 "seek_data": false, 00:06:59.870 "copy": true, 00:06:59.870 "nvme_iov_md": false 00:06:59.870 }, 00:06:59.870 "driver_specific": { 
00:06:59.870 "gpt": { 00:06:59.870 "base_bdev": "Nvme1n1", 00:06:59.870 "offset_blocks": 256, 00:06:59.870 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:59.870 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:59.870 "partition_name": "SPDK_TEST_first" 00:06:59.870 } 00:06:59.870 } 00:06:59.870 } 00:06:59.870 ]' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:00.129 { 00:07:00.129 "name": "Nvme1n1p2", 00:07:00.129 "aliases": [ 00:07:00.129 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:00.129 ], 00:07:00.129 "product_name": "GPT Disk", 00:07:00.129 "block_size": 4096, 00:07:00.129 "num_blocks": 655103, 00:07:00.129 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:00.129 "assigned_rate_limits": { 00:07:00.129 "rw_ios_per_sec": 0, 00:07:00.129 "rw_mbytes_per_sec": 0, 00:07:00.129 "r_mbytes_per_sec": 0, 00:07:00.129 "w_mbytes_per_sec": 0 00:07:00.129 }, 00:07:00.129 "claimed": false, 00:07:00.129 "zoned": false, 00:07:00.129 "supported_io_types": { 00:07:00.129 "read": true, 00:07:00.129 "write": true, 00:07:00.129 "unmap": true, 00:07:00.129 "flush": true, 00:07:00.129 "reset": true, 00:07:00.129 "nvme_admin": false, 00:07:00.129 "nvme_io": false, 00:07:00.129 "nvme_io_md": false, 00:07:00.129 "write_zeroes": true, 00:07:00.129 "zcopy": false, 00:07:00.129 "get_zone_info": false, 00:07:00.129 "zone_management": false, 00:07:00.129 "zone_append": false, 00:07:00.129 "compare": true, 00:07:00.129 "compare_and_write": false, 00:07:00.129 "abort": true, 00:07:00.129 "seek_hole": false, 00:07:00.129 "seek_data": false, 00:07:00.129 "copy": true, 00:07:00.129 "nvme_iov_md": false 00:07:00.129 }, 00:07:00.129 "driver_specific": { 00:07:00.129 "gpt": { 00:07:00.129 "base_bdev": "Nvme1n1", 00:07:00.129 "offset_blocks": 655360, 00:07:00.129 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:00.129 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:00.129 "partition_name": "SPDK_TEST_second" 00:07:00.129 } 00:07:00.129 } 00:07:00.129 } 00:07:00.129 ]' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62554 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62554 ']' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62554 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62554 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62554' 00:07:00.129 killing process with pid 62554 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62554 00:07:00.129 17:36:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62554 00:07:02.029 00:07:02.029 real 0m3.121s 00:07:02.029 user 0m3.206s 00:07:02.029 sys 0m0.430s 00:07:02.029 17:36:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.029 17:36:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:02.029 ************************************ 00:07:02.029 END TEST bdev_gpt_uuid 00:07:02.029 ************************************ 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:02.029 17:36:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.287 Waiting for block devices as requested 00:07:02.287 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.287 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:02.544 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:07.805 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:07.805 17:36:57 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:07.805 17:36:57 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:07.805 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:07.805 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:07.805 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:07.805 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:07.805 17:36:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:07.805 00:07:07.805 real 0m55.698s 00:07:07.805 user 1m11.891s 00:07:07.805 sys 0m7.812s 00:07:07.805 17:36:57 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.805 ************************************ 00:07:07.805 END TEST blockdev_nvme_gpt 00:07:07.805 ************************************ 00:07:07.805 17:36:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 17:36:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:07.805 17:36:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.805 17:36:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.805 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 ************************************ 00:07:07.805 START TEST nvme 00:07:07.805 ************************************ 00:07:07.805 17:36:57 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:08.063 * Looking for test storage... 00:07:08.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:08.063 17:36:57 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.063 17:36:57 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.063 17:36:57 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.063 17:36:57 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.063 17:36:57 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.064 17:36:57 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.064 17:36:57 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.064 17:36:57 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.064 17:36:57 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.064 17:36:57 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.064 17:36:57 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:08.064 17:36:57 nvme -- scripts/common.sh@345 -- # : 1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.064 17:36:57 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.064 17:36:57 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@353 -- # local d=1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.064 17:36:57 nvme -- scripts/common.sh@355 -- # echo 1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.064 17:36:57 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@353 -- # local d=2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.064 17:36:57 nvme -- scripts/common.sh@355 -- # echo 2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.064 17:36:57 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.064 17:36:57 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.064 17:36:57 nvme -- scripts/common.sh@368 -- # return 0 00:07:08.064 17:36:57 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.064 17:36:57 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.064 --rc genhtml_branch_coverage=1 00:07:08.064 --rc genhtml_function_coverage=1 00:07:08.064 --rc genhtml_legend=1 00:07:08.064 --rc geninfo_all_blocks=1 00:07:08.064 --rc geninfo_unexecuted_blocks=1 00:07:08.064 00:07:08.064 ' 00:07:08.064 17:36:57 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.064 --rc genhtml_branch_coverage=1 00:07:08.064 --rc genhtml_function_coverage=1 00:07:08.064 --rc genhtml_legend=1 00:07:08.064 --rc geninfo_all_blocks=1 00:07:08.064 --rc geninfo_unexecuted_blocks=1 00:07:08.064 00:07:08.064 ' 00:07:08.064 17:36:57 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.064 --rc genhtml_branch_coverage=1 00:07:08.064 --rc genhtml_function_coverage=1 00:07:08.064 --rc genhtml_legend=1 00:07:08.064 --rc geninfo_all_blocks=1 00:07:08.064 --rc geninfo_unexecuted_blocks=1 00:07:08.064 00:07:08.064 ' 00:07:08.064 17:36:57 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.064 --rc genhtml_branch_coverage=1 00:07:08.064 --rc genhtml_function_coverage=1 00:07:08.064 --rc genhtml_legend=1 00:07:08.064 --rc geninfo_all_blocks=1 00:07:08.064 --rc geninfo_unexecuted_blocks=1 00:07:08.064 00:07:08.064 ' 00:07:08.064 17:36:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.195 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.195 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.195 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.195 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.195 17:36:58 nvme -- nvme/nvme.sh@79 -- # uname 00:07:09.195 17:36:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:09.195 17:36:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:09.195 17:36:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:09.195 17:36:58 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:09.195 Waiting for stub to ready for secondary processes... 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1071 -- # stubpid=63189 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63189 ]] 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:09.195 17:36:58 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:09.195 [2024-10-13 17:36:58.864805] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:09.195 [2024-10-13 17:36:58.864933] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:10.128 [2024-10-13 17:36:59.812576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.128 17:36:59 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:10.128 17:36:59 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63189 ]] 00:07:10.128 17:36:59 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:10.128 [2024-10-13 17:36:59.917814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.128 [2024-10-13 17:36:59.918063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.128 [2024-10-13 17:36:59.918157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.128 [2024-10-13 17:36:59.931443] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:10.128 [2024-10-13 17:36:59.931627] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:10.386 [2024-10-13 17:36:59.945600] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:10.386 [2024-10-13 17:36:59.945835] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:10.386 [2024-10-13 17:36:59.948358] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:10.386 [2024-10-13 17:36:59.948547] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:10.386 [2024-10-13 17:36:59.948643] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:10.386 [2024-10-13 17:36:59.950744] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:10.386 [2024-10-13 17:36:59.950909] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:10.386 [2024-10-13 17:36:59.950982] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:10.386 [2024-10-13 17:36:59.953252] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:10.386 [2024-10-13 17:36:59.953484] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:10.386 [2024-10-13 17:36:59.953577] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:10.386 [2024-10-13 17:36:59.953632] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:10.386 [2024-10-13 17:36:59.953684] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:11.320 17:37:00 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:11.320 done. 00:07:11.320 17:37:00 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:11.320 17:37:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:11.320 17:37:00 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:11.320 17:37:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.320 17:37:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.320 ************************************ 00:07:11.320 START TEST nvme_reset 00:07:11.320 ************************************ 00:07:11.320 17:37:00 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:11.320 Initializing NVMe Controllers 00:07:11.320 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:11.320 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:11.320 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:11.320 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:11.320 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:11.320 00:07:11.320 real 0m0.229s 00:07:11.320 user 0m0.072s 00:07:11.320 sys 0m0.106s 00:07:11.320 17:37:01 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.320 17:37:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:11.320 ************************************ 00:07:11.320 END TEST nvme_reset 00:07:11.320 ************************************ 00:07:11.320 17:37:01 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:11.320 17:37:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.320 17:37:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.320 17:37:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.320 ************************************ 00:07:11.320 START TEST nvme_identify 00:07:11.320 ************************************ 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:11.320 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:11.320 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:11.320 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:11.320 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:11.320 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:11.582 17:37:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:11.582 17:37:01 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:11.582 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:11.582 [2024-10-13 17:37:01.347092] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63222 terminated unexpected 00:07:11.582 ===================================================== 00:07:11.582 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:11.582 ===================================================== 00:07:11.582 Controller Capabilities/Features 00:07:11.582 ================================ 00:07:11.582 Vendor ID: 1b36 00:07:11.582 Subsystem Vendor ID: 1af4 00:07:11.582 Serial Number: 12340 00:07:11.582 Model Number: QEMU NVMe Ctrl 00:07:11.582 Firmware Version: 8.0.0 00:07:11.582 Recommended Arb Burst: 6 00:07:11.582 IEEE OUI Identifier: 00 54 52 00:07:11.582 Multi-path I/O 00:07:11.582 May have multiple subsystem ports: No 00:07:11.582 May have multiple controllers: No 00:07:11.582 Associated with SR-IOV VF: No 00:07:11.582 Max Data Transfer Size: 524288 00:07:11.582 Max Number of Namespaces: 256 00:07:11.582 Max Number of I/O Queues: 64 00:07:11.582 NVMe Specification Version (VS): 1.4 00:07:11.582 NVMe Specification Version (Identify): 1.4 00:07:11.582 Maximum Queue Entries: 2048 00:07:11.582 Contiguous Queues Required: Yes 00:07:11.582 Arbitration Mechanisms Supported 00:07:11.582 Weighted Round Robin: Not Supported 00:07:11.582 Vendor Specific: Not Supported 00:07:11.582 Reset Timeout: 7500 ms 00:07:11.582 Doorbell Stride: 4 bytes 00:07:11.582 NVM Subsystem Reset: Not Supported 00:07:11.582 Command Sets Supported 00:07:11.582 NVM Command Set: Supported 00:07:11.582 Boot Partition: Not Supported 00:07:11.582 Memory Page Size Minimum: 4096 bytes 00:07:11.582 Memory Page Size Maximum: 65536 bytes 00:07:11.582 Persistent Memory Region: Not Supported 00:07:11.582 Optional Asynchronous Events Supported 00:07:11.582 Namespace Attribute Notices: Supported 00:07:11.582 Firmware Activation Notices: Not Supported 00:07:11.582 ANA Change Notices: Not Supported 00:07:11.582 PLE Aggregate Log Change Notices: Not Supported 00:07:11.582 LBA Status Info Alert Notices: Not Supported 00:07:11.582 EGE Aggregate Log Change Notices: Not Supported 00:07:11.582 Normal NVM Subsystem Shutdown event: Not Supported 00:07:11.582 Zone Descriptor Change Notices: Not Supported 00:07:11.582 Discovery Log Change Notices: Not Supported 00:07:11.582 Controller Attributes 00:07:11.582 128-bit Host Identifier: Not Supported 00:07:11.582 Non-Operational Permissive Mode: Not Supported 00:07:11.582 NVM Sets: Not Supported 00:07:11.582 Read Recovery Levels: Not Supported 00:07:11.582 Endurance Groups: Not Supported 00:07:11.582 Predictable Latency Mode: Not Supported 00:07:11.582 Traffic Based Keep Alive: Not Supported 00:07:11.582 Namespace Granularity: Not Supported 00:07:11.582 SQ Associations: Not Supported 00:07:11.582 UUID List: Not Supported 00:07:11.582 Multi-Domain Subsystem: Not Supported 00:07:11.582 Fixed Capacity Management: Not Supported 00:07:11.582 Variable Capacity Management: Not Supported 00:07:11.582 Delete Endurance Group: Not Supported 00:07:11.582 Delete NVM Set: Not Supported 00:07:11.582 Extended LBA Formats Supported: Supported 00:07:11.582 Flexible Data Placement Supported: Not Supported 00:07:11.582 00:07:11.582 Controller Memory Buffer Support 00:07:11.582 ================================ 00:07:11.582 Supported: No 00:07:11.582
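(Aside: the bdfs array driving this identify pass is built by piping scripts/gen_nvme.sh through jq, as traced above. A minimal standalone sketch of that enumeration pattern, assuming only that SPDK_ROOT points at the checkout used in this run; the guard and error message are illustrative, not output from this log:)

    # Sketch only -- mirrors the get_nvme_bdfs helper traced above, not a new test step.
    SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
    # gen_nvme.sh emits a JSON bdev config; each controller's PCI address lives
    # under .config[].params.traddr, which jq -r prints one per line.
    bdfs=($("$SPDK_ROOT/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"

Enumerating through gen_nvme.sh rather than lspci keeps the device list consistent with whatever setup.sh has actually bound for the test run.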
00:07:11.582 Persistent Memory Region Support 00:07:11.582 ================================ 00:07:11.582 Supported: No 00:07:11.582 00:07:11.582 Admin Command Set Attributes 00:07:11.582 ============================ 00:07:11.582 Security Send/Receive: Not Supported 00:07:11.582 Format NVM: Supported 00:07:11.582 Firmware Activate/Download: Not Supported 00:07:11.582 Namespace Management: Supported 00:07:11.582 Device Self-Test: Not Supported 00:07:11.582 Directives: Supported 00:07:11.582 NVMe-MI: Not Supported 00:07:11.582 Virtualization Management: Not Supported 00:07:11.582 Doorbell Buffer Config: Supported 00:07:11.582 Get LBA Status Capability: Not Supported 00:07:11.582 Command & Feature Lockdown Capability: Not Supported 00:07:11.582 Abort Command Limit: 4 00:07:11.582 Async Event Request Limit: 4 00:07:11.582 Number of Firmware Slots: N/A 00:07:11.582 Firmware Slot 1 Read-Only: N/A 00:07:11.582 Firmware Activation Without Reset: N/A 00:07:11.582 Multiple Update Detection Support: N/A 00:07:11.582 Firmware Update Granularity: No Information Provided 00:07:11.582 Per-Namespace SMART Log: Yes 00:07:11.582 Asymmetric Namespace Access Log Page: Not Supported 00:07:11.582 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:11.582 Command Effects Log Page: Supported 00:07:11.582 Get Log Page Extended Data: Supported 00:07:11.582 Telemetry Log Pages: Not Supported 00:07:11.582 Persistent Event Log Pages: Not Supported 00:07:11.582 Supported Log Pages Log Page: May Support 00:07:11.582 Commands Supported & Effects Log Page: Not Supported 00:07:11.582 Feature Identifiers & Effects Log Page:May Support 00:07:11.582 NVMe-MI Commands & Effects Log Page: May Support 00:07:11.582 Data Area 4 for Telemetry Log: Not Supported 00:07:11.582 Error Log Page Entries Supported: 1 00:07:11.582 Keep Alive: Not Supported 00:07:11.582 00:07:11.582 NVM Command Set Attributes 00:07:11.582 ========================== 00:07:11.582 Submission Queue Entry Size 00:07:11.582 Max: 64 00:07:11.582 Min: 64 00:07:11.582 Completion Queue Entry Size 00:07:11.582 Max: 16 00:07:11.582 Min: 16 00:07:11.582 Number of Namespaces: 256 00:07:11.582 Compare Command: Supported 00:07:11.582 Write Uncorrectable Command: Not Supported 00:07:11.582 Dataset Management Command: Supported 00:07:11.582 Write Zeroes Command: Supported 00:07:11.582 Set Features Save Field: Supported 00:07:11.582 Reservations: Not Supported 00:07:11.582 Timestamp: Supported 00:07:11.582 Copy: Supported 00:07:11.582 Volatile Write Cache: Present 00:07:11.582 Atomic Write Unit (Normal): 1 00:07:11.582 Atomic Write Unit (PFail): 1 00:07:11.582 Atomic Compare & Write Unit: 1 00:07:11.582 Fused Compare & Write: Not Supported 00:07:11.582 Scatter-Gather List 00:07:11.582 SGL Command Set: Supported 00:07:11.582 SGL Keyed: Not Supported 00:07:11.582 SGL Bit Bucket Descriptor: Not Supported 00:07:11.582 SGL Metadata Pointer: Not Supported 00:07:11.582 Oversized SGL: Not Supported 00:07:11.582 SGL Metadata Address: Not Supported 00:07:11.582 SGL Offset: Not Supported 00:07:11.582 Transport SGL Data Block: Not Supported 00:07:11.582 Replay Protected Memory Block: Not Supported 00:07:11.582 00:07:11.582 Firmware Slot Information 00:07:11.582 ========================= 00:07:11.582 Active slot: 1 00:07:11.582 Slot 1 Firmware Revision: 1.0 00:07:11.582 00:07:11.582 00:07:11.582 Commands Supported and Effects 00:07:11.582 ============================== 00:07:11.582 Admin Commands 00:07:11.582 -------------- 00:07:11.582 Delete I/O Submission Queue (00h): Supported 00:07:11.582 
Create I/O Submission Queue (01h): Supported 00:07:11.582 Get Log Page (02h): Supported 00:07:11.582 Delete I/O Completion Queue (04h): Supported 00:07:11.582 Create I/O Completion Queue (05h): Supported 00:07:11.582 Identify (06h): Supported 00:07:11.582 Abort (08h): Supported 00:07:11.582 Set Features (09h): Supported 00:07:11.582 Get Features (0Ah): Supported 00:07:11.582 Asynchronous Event Request (0Ch): Supported 00:07:11.582 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:11.582 Directive Send (19h): Supported 00:07:11.582 Directive Receive (1Ah): Supported 00:07:11.582 Virtualization Management (1Ch): Supported 00:07:11.582 Doorbell Buffer Config (7Ch): Supported 00:07:11.582 Format NVM (80h): Supported LBA-Change 00:07:11.582 I/O Commands 00:07:11.582 ------------ 00:07:11.582 Flush (00h): Supported LBA-Change 00:07:11.582 Write (01h): Supported LBA-Change 00:07:11.582 Read (02h): Supported 00:07:11.582 Compare (05h): Supported 00:07:11.583 Write Zeroes (08h): Supported LBA-Change 00:07:11.583 Dataset Management (09h): Supported LBA-Change 00:07:11.583 Unknown (0Ch): Supported 00:07:11.583 Unknown (12h): Supported 00:07:11.583 Copy (19h): Supported LBA-Change 00:07:11.583 Unknown (1Dh): Supported LBA-Change 00:07:11.583 00:07:11.583 Error Log 00:07:11.583 ========= 00:07:11.583 00:07:11.583 Arbitration 00:07:11.583 =========== 00:07:11.583 Arbitration Burst: no limit 00:07:11.583 00:07:11.583 Power Management 00:07:11.583 ================ 00:07:11.583 Number of Power States: 1 00:07:11.583 Current Power State: Power State #0 00:07:11.583 Power State #0: 00:07:11.583 Max Power: 25.00 W 00:07:11.583 Non-Operational State: Operational 00:07:11.583 Entry Latency: 16 microseconds 00:07:11.583 Exit Latency: 4 microseconds 00:07:11.583 Relative Read Throughput: 0 00:07:11.583 Relative Read Latency: 0 00:07:11.583 Relative Write Throughput: 0 00:07:11.583 Relative Write Latency: 0 00:07:11.583 [2024-10-13 17:37:01.348170] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63222 terminated unexpected 00:07:11.583 Idle Power: Not Reported 00:07:11.583 Active Power: Not Reported 00:07:11.583 Non-Operational Permissive Mode: Not Supported 00:07:11.583 00:07:11.583 Health Information 00:07:11.583 ================== 00:07:11.583 Critical Warnings: 00:07:11.583 Available Spare Space: OK 00:07:11.583 Temperature: OK 00:07:11.583 Device Reliability: OK 00:07:11.583 Read Only: No 00:07:11.583 Volatile Memory Backup: OK 00:07:11.583 Current Temperature: 323 Kelvin (50 Celsius) 00:07:11.583 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:11.583 Available Spare: 0% 00:07:11.583 Available Spare Threshold: 0% 00:07:11.583 Life Percentage Used: 0% 00:07:11.583 Data Units Read: 669 00:07:11.583 Data Units Written: 597 00:07:11.583 Host Read Commands: 39263 00:07:11.583 Host Write Commands: 39049 00:07:11.583 Controller Busy Time: 0 minutes 00:07:11.583 Power Cycles: 0 00:07:11.583 Power On Hours: 0 hours 00:07:11.583 Unsafe Shutdowns: 0 00:07:11.583 Unrecoverable Media Errors: 0 00:07:11.583 Lifetime Error Log Entries: 0 00:07:11.583 Warning Temperature Time: 0 minutes 00:07:11.583 Critical Temperature Time: 0 minutes 00:07:11.583 00:07:11.583 Number of Queues 00:07:11.583 ================ 00:07:11.583 Number of I/O Submission Queues: 64 00:07:11.583 Number of I/O Completion Queues: 64 00:07:11.583 00:07:11.583 ZNS Specific Controller Data 00:07:11.583 ============================ 00:07:11.583 Zone Append Size Limit: 0 00:07:11.583 00:07:11.583
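(Aside: fields in the health block above, e.g. "Current Temperature: 323 Kelvin (50 Celsius)", are plain "name: value" lines, so they can be scraped directly. A rough sketch; the parsing approach is an assumption for illustration, not something this test does:)

    # Sketch only -- not part of the test run; pulls one health field from the
    # identify report of the first controller.
    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # -F': *' splits "Current Temperature: 323 Kelvin (50 Celsius)" at the colon;
    # exit stops after the first controller's dump.
    "$IDENTIFY" -i 0 | awk -F': *' '/^Current Temperature:/ {print $2; exit}'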
00:07:11.583 Active Namespaces 00:07:11.583 ================= 00:07:11.583 Namespace ID:1 00:07:11.583 Error Recovery Timeout: Unlimited 00:07:11.583 Command Set Identifier: NVM (00h) 00:07:11.583 Deallocate: Supported 00:07:11.583 Deallocated/Unwritten Error: Supported 00:07:11.583 Deallocated Read Value: All 0x00 00:07:11.583 Deallocate in Write Zeroes: Not Supported 00:07:11.583 Deallocated Guard Field: 0xFFFF 00:07:11.583 Flush: Supported 00:07:11.583 Reservation: Not Supported 00:07:11.583 Metadata Transferred as: Separate Metadata Buffer 00:07:11.583 Namespace Sharing Capabilities: Private 00:07:11.583 Size (in LBAs): 1548666 (5GiB) 00:07:11.583 Capacity (in LBAs): 1548666 (5GiB) 00:07:11.583 Utilization (in LBAs): 1548666 (5GiB) 00:07:11.583 Thin Provisioning: Not Supported 00:07:11.583 Per-NS Atomic Units: No 00:07:11.583 Maximum Single Source Range Length: 128 00:07:11.583 Maximum Copy Length: 128 00:07:11.583 Maximum Source Range Count: 128 00:07:11.583 NGUID/EUI64 Never Reused: No 00:07:11.583 Namespace Write Protected: No 00:07:11.583 Number of LBA Formats: 8 00:07:11.583 Current LBA Format: LBA Format #07 00:07:11.583 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.583 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.583 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.583 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.583 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.583 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.583 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.583 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.583 00:07:11.583 NVM Specific Namespace Data 00:07:11.583 =========================== 00:07:11.583 Logical Block Storage Tag Mask: 0 00:07:11.583 Protection Information Capabilities: 00:07:11.583 16b Guard Protection Information Storage Tag Support: No 00:07:11.583 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.583 Storage Tag Check Read Support: No 00:07:11.583 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.583 ===================================================== 00:07:11.583 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:11.583 ===================================================== 00:07:11.583 Controller Capabilities/Features 00:07:11.583 ================================ 00:07:11.583 Vendor ID: 1b36 00:07:11.583 Subsystem Vendor ID: 1af4 00:07:11.583 Serial Number: 12341 00:07:11.583 Model Number: QEMU NVMe Ctrl 00:07:11.583 Firmware Version: 8.0.0 00:07:11.583 Recommended Arb Burst: 6 00:07:11.583 IEEE OUI Identifier: 00 54 52 00:07:11.583 Multi-path I/O 00:07:11.583 May have multiple subsystem ports: No 00:07:11.583 May have multiple controllers: No 00:07:11.583 
Associated with SR-IOV VF: No 00:07:11.583 Max Data Transfer Size: 524288 00:07:11.583 Max Number of Namespaces: 256 00:07:11.583 Max Number of I/O Queues: 64 00:07:11.583 NVMe Specification Version (VS): 1.4 00:07:11.583 NVMe Specification Version (Identify): 1.4 00:07:11.583 Maximum Queue Entries: 2048 00:07:11.583 Contiguous Queues Required: Yes 00:07:11.583 Arbitration Mechanisms Supported 00:07:11.583 Weighted Round Robin: Not Supported 00:07:11.583 Vendor Specific: Not Supported 00:07:11.583 Reset Timeout: 7500 ms 00:07:11.583 Doorbell Stride: 4 bytes 00:07:11.583 NVM Subsystem Reset: Not Supported 00:07:11.583 Command Sets Supported 00:07:11.583 NVM Command Set: Supported 00:07:11.583 Boot Partition: Not Supported 00:07:11.583 Memory Page Size Minimum: 4096 bytes 00:07:11.583 Memory Page Size Maximum: 65536 bytes 00:07:11.583 Persistent Memory Region: Not Supported 00:07:11.583 Optional Asynchronous Events Supported 00:07:11.583 Namespace Attribute Notices: Supported 00:07:11.583 Firmware Activation Notices: Not Supported 00:07:11.583 ANA Change Notices: Not Supported 00:07:11.583 PLE Aggregate Log Change Notices: Not Supported 00:07:11.583 LBA Status Info Alert Notices: Not Supported 00:07:11.583 EGE Aggregate Log Change Notices: Not Supported 00:07:11.583 Normal NVM Subsystem Shutdown event: Not Supported 00:07:11.583 Zone Descriptor Change Notices: Not Supported 00:07:11.583 Discovery Log Change Notices: Not Supported 00:07:11.583 Controller Attributes 00:07:11.583 128-bit Host Identifier: Not Supported 00:07:11.583 Non-Operational Permissive Mode: Not Supported 00:07:11.583 NVM Sets: Not Supported 00:07:11.583 Read Recovery Levels: Not Supported 00:07:11.583 Endurance Groups: Not Supported 00:07:11.583 Predictable Latency Mode: Not Supported 00:07:11.583 Traffic Based Keep Alive: Not Supported 00:07:11.583 Namespace Granularity: Not Supported 00:07:11.583 SQ Associations: Not Supported 00:07:11.583 UUID List: Not Supported 00:07:11.583 Multi-Domain Subsystem: Not Supported 00:07:11.583 Fixed Capacity Management: Not Supported 00:07:11.583 Variable Capacity Management: Not Supported 00:07:11.583 Delete Endurance Group: Not Supported 00:07:11.583 Delete NVM Set: Not Supported 00:07:11.583 Extended LBA Formats Supported: Supported 00:07:11.583 Flexible Data Placement Supported: Not Supported 00:07:11.583 00:07:11.584 Controller Memory Buffer Support 00:07:11.584 ================================ 00:07:11.584 Supported: No 00:07:11.584 00:07:11.584 Persistent Memory Region Support 00:07:11.584 ================================ 00:07:11.584 Supported: No 00:07:11.584 00:07:11.584 Admin Command Set Attributes 00:07:11.584 ============================ 00:07:11.584 Security Send/Receive: Not Supported 00:07:11.584 Format NVM: Supported 00:07:11.584 Firmware Activate/Download: Not Supported 00:07:11.584 Namespace Management: Supported 00:07:11.584 Device Self-Test: Not Supported 00:07:11.584 Directives: Supported 00:07:11.584 NVMe-MI: Not Supported 00:07:11.584 Virtualization Management: Not Supported 00:07:11.584 Doorbell Buffer Config: Supported 00:07:11.584 Get LBA Status Capability: Not Supported 00:07:11.584 Command & Feature Lockdown Capability: Not Supported 00:07:11.584 Abort Command Limit: 4 00:07:11.584 Async Event Request Limit: 4 00:07:11.584 Number of Firmware Slots: N/A 00:07:11.584 Firmware Slot 1 Read-Only: N/A 00:07:11.584 Firmware Activation Without Reset: N/A 00:07:11.584 Multiple Update Detection Support: N/A 00:07:11.584 Firmware Update Granularity: No Information 
Provided 00:07:11.584 Per-Namespace SMART Log: Yes 00:07:11.584 Asymmetric Namespace Access Log Page: Not Supported 00:07:11.584 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:11.584 Command Effects Log Page: Supported 00:07:11.584 Get Log Page Extended Data: Supported 00:07:11.584 Telemetry Log Pages: Not Supported 00:07:11.584 Persistent Event Log Pages: Not Supported 00:07:11.584 Supported Log Pages Log Page: May Support 00:07:11.584 Commands Supported & Effects Log Page: Not Supported 00:07:11.584 Feature Identifiers & Effects Log Page:May Support 00:07:11.584 NVMe-MI Commands & Effects Log Page: May Support 00:07:11.584 Data Area 4 for Telemetry Log: Not Supported 00:07:11.584 Error Log Page Entries Supported: 1 00:07:11.584 Keep Alive: Not Supported 00:07:11.584 00:07:11.584 NVM Command Set Attributes 00:07:11.584 ========================== 00:07:11.584 Submission Queue Entry Size 00:07:11.584 Max: 64 00:07:11.584 Min: 64 00:07:11.584 Completion Queue Entry Size 00:07:11.584 Max: 16 00:07:11.584 Min: 16 00:07:11.584 Number of Namespaces: 256 00:07:11.584 Compare Command: Supported 00:07:11.584 Write Uncorrectable Command: Not Supported 00:07:11.584 Dataset Management Command: Supported 00:07:11.584 Write Zeroes Command: Supported 00:07:11.584 Set Features Save Field: Supported 00:07:11.584 Reservations: Not Supported 00:07:11.584 Timestamp: Supported 00:07:11.584 Copy: Supported 00:07:11.584 Volatile Write Cache: Present 00:07:11.584 Atomic Write Unit (Normal): 1 00:07:11.584 Atomic Write Unit (PFail): 1 00:07:11.584 Atomic Compare & Write Unit: 1 00:07:11.584 Fused Compare & Write: Not Supported 00:07:11.584 Scatter-Gather List 00:07:11.584 SGL Command Set: Supported 00:07:11.584 SGL Keyed: Not Supported 00:07:11.584 SGL Bit Bucket Descriptor: Not Supported 00:07:11.584 SGL Metadata Pointer: Not Supported 00:07:11.584 Oversized SGL: Not Supported 00:07:11.584 SGL Metadata Address: Not Supported 00:07:11.584 SGL Offset: Not Supported 00:07:11.584 Transport SGL Data Block: Not Supported 00:07:11.584 Replay Protected Memory Block: Not Supported 00:07:11.584 00:07:11.584 Firmware Slot Information 00:07:11.584 ========================= 00:07:11.584 Active slot: 1 00:07:11.584 Slot 1 Firmware Revision: 1.0 00:07:11.584 00:07:11.584 00:07:11.584 Commands Supported and Effects 00:07:11.584 ============================== 00:07:11.584 Admin Commands 00:07:11.584 -------------- 00:07:11.584 Delete I/O Submission Queue (00h): Supported 00:07:11.584 Create I/O Submission Queue (01h): Supported 00:07:11.584 Get Log Page (02h): Supported 00:07:11.584 Delete I/O Completion Queue (04h): Supported 00:07:11.584 Create I/O Completion Queue (05h): Supported 00:07:11.584 Identify (06h): Supported 00:07:11.584 Abort (08h): Supported 00:07:11.584 Set Features (09h): Supported 00:07:11.584 Get Features (0Ah): Supported 00:07:11.584 Asynchronous Event Request (0Ch): Supported 00:07:11.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:11.584 Directive Send (19h): Supported 00:07:11.584 Directive Receive (1Ah): Supported 00:07:11.584 Virtualization Management (1Ch): Supported 00:07:11.584 Doorbell Buffer Config (7Ch): Supported 00:07:11.584 Format NVM (80h): Supported LBA-Change 00:07:11.584 I/O Commands 00:07:11.584 ------------ 00:07:11.584 Flush (00h): Supported LBA-Change 00:07:11.584 Write (01h): Supported LBA-Change 00:07:11.584 Read (02h): Supported 00:07:11.584 Compare (05h): Supported 00:07:11.584 Write Zeroes (08h): Supported LBA-Change 00:07:11.584 Dataset Management (09h): 
Supported LBA-Change 00:07:11.584 Unknown (0Ch): Supported 00:07:11.584 Unknown (12h): Supported 00:07:11.584 Copy (19h): Supported LBA-Change 00:07:11.584 Unknown (1Dh): Supported LBA-Change 00:07:11.584 00:07:11.584 Error Log 00:07:11.584 ========= 00:07:11.584 00:07:11.584 Arbitration 00:07:11.584 =========== 00:07:11.584 Arbitration Burst: no limit 00:07:11.584 00:07:11.584 Power Management 00:07:11.584 ================ 00:07:11.584 Number of Power States: 1 00:07:11.584 Current Power State: Power State #0 00:07:11.584 Power State #0: 00:07:11.584 Max Power: 25.00 W 00:07:11.584 Non-Operational State: Operational 00:07:11.584 Entry Latency: 16 microseconds 00:07:11.584 Exit Latency: 4 microseconds 00:07:11.584 Relative Read Throughput: 0 00:07:11.584 Relative Read Latency: 0 00:07:11.584 Relative Write Throughput: 0 00:07:11.584 Relative Write Latency: 0 00:07:11.584 Idle Power: Not Reported 00:07:11.584 Active Power: Not Reported 00:07:11.584 Non-Operational Permissive Mode: Not Supported 00:07:11.584 00:07:11.584 Health Information 00:07:11.584 ================== 00:07:11.584 Critical Warnings: 00:07:11.584 Available Spare Space: OK 00:07:11.584 [2024-10-13 17:37:01.348952] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63222 terminated unexpected 00:07:11.584 Temperature: OK 00:07:11.584 Device Reliability: OK 00:07:11.584 Read Only: No 00:07:11.584 Volatile Memory Backup: OK 00:07:11.584 Current Temperature: 323 Kelvin (50 Celsius) 00:07:11.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:11.584 Available Spare: 0% 00:07:11.584 Available Spare Threshold: 0% 00:07:11.584 Life Percentage Used: 0% 00:07:11.584 Data Units Read: 994 00:07:11.584 Data Units Written: 867 00:07:11.584 Host Read Commands: 57813 00:07:11.584 Host Write Commands: 56701 00:07:11.584 Controller Busy Time: 0 minutes 00:07:11.584 Power Cycles: 0 00:07:11.584 Power On Hours: 0 hours 00:07:11.584 Unsafe Shutdowns: 0 00:07:11.584 Unrecoverable Media Errors: 0 00:07:11.584 Lifetime Error Log Entries: 0 00:07:11.584 Warning Temperature Time: 0 minutes 00:07:11.584 Critical Temperature Time: 0 minutes 00:07:11.584 00:07:11.584 Number of Queues 00:07:11.584 ================ 00:07:11.584 Number of I/O Submission Queues: 64 00:07:11.584 Number of I/O Completion Queues: 64 00:07:11.584 00:07:11.584 ZNS Specific Controller Data 00:07:11.584 ============================ 00:07:11.584 Zone Append Size Limit: 0 00:07:11.584 00:07:11.584 00:07:11.584 Active Namespaces 00:07:11.584 ================= 00:07:11.584 Namespace ID:1 00:07:11.584 Error Recovery Timeout: Unlimited 00:07:11.584 Command Set Identifier: NVM (00h) 00:07:11.584 Deallocate: Supported 00:07:11.584 Deallocated/Unwritten Error: Supported 00:07:11.584 Deallocated Read Value: All 0x00 00:07:11.584 Deallocate in Write Zeroes: Not Supported 00:07:11.584 Deallocated Guard Field: 0xFFFF 00:07:11.584 Flush: Supported 00:07:11.584 Reservation: Not Supported 00:07:11.584 Namespace Sharing Capabilities: Private 00:07:11.584 Size (in LBAs): 1310720 (5GiB) 00:07:11.584 Capacity (in LBAs): 1310720 (5GiB) 00:07:11.584 Utilization (in LBAs): 1310720 (5GiB) 00:07:11.584 Thin Provisioning: Not Supported 00:07:11.584 Per-NS Atomic Units: No 00:07:11.585 Maximum Single Source Range Length: 128 00:07:11.585 Maximum Copy Length: 128 00:07:11.585 Maximum Source Range Count: 128 00:07:11.585 NGUID/EUI64 Never Reused: No 00:07:11.585 Namespace Write Protected: No 00:07:11.585 Number of LBA Formats: 8 00:07:11.585 Current LBA Format: LBA Format 
#04 00:07:11.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.585 00:07:11.585 NVM Specific Namespace Data 00:07:11.585 =========================== 00:07:11.585 Logical Block Storage Tag Mask: 0 00:07:11.585 Protection Information Capabilities: 00:07:11.585 16b Guard Protection Information Storage Tag Support: No 00:07:11.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.585 Storage Tag Check Read Support: No 00:07:11.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.585 ===================================================== 00:07:11.585 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:11.585 ===================================================== 00:07:11.585 Controller Capabilities/Features 00:07:11.585 ================================ 00:07:11.585 Vendor ID: 1b36 00:07:11.585 Subsystem Vendor ID: 1af4 00:07:11.585 Serial Number: 12343 00:07:11.585 Model Number: QEMU NVMe Ctrl 00:07:11.585 Firmware Version: 8.0.0 00:07:11.585 Recommended Arb Burst: 6 00:07:11.585 IEEE OUI Identifier: 00 54 52 00:07:11.585 Multi-path I/O 00:07:11.585 May have multiple subsystem ports: No 00:07:11.585 May have multiple controllers: Yes 00:07:11.585 Associated with SR-IOV VF: No 00:07:11.585 Max Data Transfer Size: 524288 00:07:11.585 Max Number of Namespaces: 256 00:07:11.585 Max Number of I/O Queues: 64 00:07:11.585 NVMe Specification Version (VS): 1.4 00:07:11.585 NVMe Specification Version (Identify): 1.4 00:07:11.585 Maximum Queue Entries: 2048 00:07:11.585 Contiguous Queues Required: Yes 00:07:11.585 Arbitration Mechanisms Supported 00:07:11.585 Weighted Round Robin: Not Supported 00:07:11.585 Vendor Specific: Not Supported 00:07:11.585 Reset Timeout: 7500 ms 00:07:11.585 Doorbell Stride: 4 bytes 00:07:11.585 NVM Subsystem Reset: Not Supported 00:07:11.585 Command Sets Supported 00:07:11.585 NVM Command Set: Supported 00:07:11.585 Boot Partition: Not Supported 00:07:11.585 Memory Page Size Minimum: 4096 bytes 00:07:11.585 Memory Page Size Maximum: 65536 bytes 00:07:11.585 Persistent Memory Region: Not Supported 00:07:11.585 Optional Asynchronous Events Supported 00:07:11.585 Namespace Attribute Notices: Supported 00:07:11.585 Firmware Activation Notices: Not Supported 00:07:11.585 ANA Change Notices: Not Supported 00:07:11.585 PLE Aggregate Log Change 
Notices: Not Supported 00:07:11.585 LBA Status Info Alert Notices: Not Supported 00:07:11.585 EGE Aggregate Log Change Notices: Not Supported 00:07:11.585 Normal NVM Subsystem Shutdown event: Not Supported 00:07:11.585 Zone Descriptor Change Notices: Not Supported 00:07:11.585 Discovery Log Change Notices: Not Supported 00:07:11.585 Controller Attributes 00:07:11.585 128-bit Host Identifier: Not Supported 00:07:11.585 Non-Operational Permissive Mode: Not Supported 00:07:11.585 NVM Sets: Not Supported 00:07:11.585 Read Recovery Levels: Not Supported 00:07:11.585 Endurance Groups: Supported 00:07:11.585 Predictable Latency Mode: Not Supported 00:07:11.585 Traffic Based Keep Alive: Not Supported 00:07:11.585 Namespace Granularity: Not Supported 00:07:11.585 SQ Associations: Not Supported 00:07:11.585 UUID List: Not Supported 00:07:11.585 Multi-Domain Subsystem: Not Supported 00:07:11.585 Fixed Capacity Management: Not Supported 00:07:11.585 Variable Capacity Management: Not Supported 00:07:11.585 Delete Endurance Group: Not Supported 00:07:11.585 Delete NVM Set: Not Supported 00:07:11.585 Extended LBA Formats Supported: Supported 00:07:11.585 Flexible Data Placement Supported: Supported 00:07:11.585 00:07:11.585 Controller Memory Buffer Support 00:07:11.585 ================================ 00:07:11.585 Supported: No 00:07:11.585 00:07:11.585 Persistent Memory Region Support 00:07:11.585 ================================ 00:07:11.585 Supported: No 00:07:11.585 00:07:11.585 Admin Command Set Attributes 00:07:11.585 ============================ 00:07:11.585 Security Send/Receive: Not Supported 00:07:11.585 Format NVM: Supported 00:07:11.585 Firmware Activate/Download: Not Supported 00:07:11.585 Namespace Management: Supported 00:07:11.585 Device Self-Test: Not Supported 00:07:11.585 Directives: Supported 00:07:11.585 NVMe-MI: Not Supported 00:07:11.585 Virtualization Management: Not Supported 00:07:11.585 Doorbell Buffer Config: Supported 00:07:11.585 Get LBA Status Capability: Not Supported 00:07:11.585 Command & Feature Lockdown Capability: Not Supported 00:07:11.585 Abort Command Limit: 4 00:07:11.585 Async Event Request Limit: 4 00:07:11.585 Number of Firmware Slots: N/A 00:07:11.585 Firmware Slot 1 Read-Only: N/A 00:07:11.585 Firmware Activation Without Reset: N/A 00:07:11.585 Multiple Update Detection Support: N/A 00:07:11.585 Firmware Update Granularity: No Information Provided 00:07:11.585 Per-Namespace SMART Log: Yes 00:07:11.585 Asymmetric Namespace Access Log Page: Not Supported 00:07:11.585 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:11.585 Command Effects Log Page: Supported 00:07:11.585 Get Log Page Extended Data: Supported 00:07:11.585 Telemetry Log Pages: Not Supported 00:07:11.585 Persistent Event Log Pages: Not Supported 00:07:11.585 Supported Log Pages Log Page: May Support 00:07:11.585 Commands Supported & Effects Log Page: Not Supported 00:07:11.585 Feature Identifiers & Effects Log Page:May Support 00:07:11.585 NVMe-MI Commands & Effects Log Page: May Support 00:07:11.585 Data Area 4 for Telemetry Log: Not Supported 00:07:11.585 Error Log Page Entries Supported: 1 00:07:11.585 Keep Alive: Not Supported 00:07:11.585 00:07:11.585 NVM Command Set Attributes 00:07:11.585 ========================== 00:07:11.585 Submission Queue Entry Size 00:07:11.585 Max: 64 00:07:11.585 Min: 64 00:07:11.585 Completion Queue Entry Size 00:07:11.585 Max: 16 00:07:11.585 Min: 16 00:07:11.585 Number of Namespaces: 256 00:07:11.585 Compare Command: Supported 00:07:11.585 Write 
Uncorrectable Command: Not Supported 00:07:11.585 Dataset Management Command: Supported 00:07:11.585 Write Zeroes Command: Supported 00:07:11.585 Set Features Save Field: Supported 00:07:11.585 Reservations: Not Supported 00:07:11.585 Timestamp: Supported 00:07:11.585 Copy: Supported 00:07:11.585 Volatile Write Cache: Present 00:07:11.585 Atomic Write Unit (Normal): 1 00:07:11.585 Atomic Write Unit (PFail): 1 00:07:11.585 Atomic Compare & Write Unit: 1 00:07:11.585 Fused Compare & Write: Not Supported 00:07:11.585 Scatter-Gather List 00:07:11.585 SGL Command Set: Supported 00:07:11.585 SGL Keyed: Not Supported 00:07:11.585 SGL Bit Bucket Descriptor: Not Supported 00:07:11.585 SGL Metadata Pointer: Not Supported 00:07:11.585 Oversized SGL: Not Supported 00:07:11.585 SGL Metadata Address: Not Supported 00:07:11.585 SGL Offset: Not Supported 00:07:11.585 Transport SGL Data Block: Not Supported 00:07:11.585 Replay Protected Memory Block: Not Supported 00:07:11.585 00:07:11.585 Firmware Slot Information 00:07:11.585 ========================= 00:07:11.585 Active slot: 1 00:07:11.585 Slot 1 Firmware Revision: 1.0 00:07:11.585 00:07:11.585 00:07:11.585 Commands Supported and Effects 00:07:11.585 ============================== 00:07:11.585 Admin Commands 00:07:11.585 -------------- 00:07:11.585 Delete I/O Submission Queue (00h): Supported 00:07:11.585 Create I/O Submission Queue (01h): Supported 00:07:11.585 Get Log Page (02h): Supported 00:07:11.585 Delete I/O Completion Queue (04h): Supported 00:07:11.585 Create I/O Completion Queue (05h): Supported 00:07:11.585 Identify (06h): Supported 00:07:11.585 Abort (08h): Supported 00:07:11.585 Set Features (09h): Supported 00:07:11.585 Get Features (0Ah): Supported 00:07:11.585 Asynchronous Event Request (0Ch): Supported 00:07:11.585 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:11.585 Directive Send (19h): Supported 00:07:11.586 Directive Receive (1Ah): Supported 00:07:11.586 Virtualization Management (1Ch): Supported 00:07:11.586 Doorbell Buffer Config (7Ch): Supported 00:07:11.586 Format NVM (80h): Supported LBA-Change 00:07:11.586 I/O Commands 00:07:11.586 ------------ 00:07:11.586 Flush (00h): Supported LBA-Change 00:07:11.586 Write (01h): Supported LBA-Change 00:07:11.586 Read (02h): Supported 00:07:11.586 Compare (05h): Supported 00:07:11.586 Write Zeroes (08h): Supported LBA-Change 00:07:11.586 Dataset Management (09h): Supported LBA-Change 00:07:11.586 Unknown (0Ch): Supported 00:07:11.586 Unknown (12h): Supported 00:07:11.586 Copy (19h): Supported LBA-Change 00:07:11.586 Unknown (1Dh): Supported LBA-Change 00:07:11.586 00:07:11.586 Error Log 00:07:11.586 ========= 00:07:11.586 00:07:11.586 Arbitration 00:07:11.586 =========== 00:07:11.586 Arbitration Burst: no limit 00:07:11.586 00:07:11.586 Power Management 00:07:11.586 ================ 00:07:11.586 Number of Power States: 1 00:07:11.586 Current Power State: Power State #0 00:07:11.586 Power State #0: 00:07:11.586 Max Power: 25.00 W 00:07:11.586 Non-Operational State: Operational 00:07:11.586 Entry Latency: 16 microseconds 00:07:11.586 Exit Latency: 4 microseconds 00:07:11.586 Relative Read Throughput: 0 00:07:11.586 Relative Read Latency: 0 00:07:11.586 Relative Write Throughput: 0 00:07:11.586 Relative Write Latency: 0 00:07:11.586 Idle Power: Not Reported 00:07:11.586 Active Power: Not Reported 00:07:11.586 Non-Operational Permissive Mode: Not Supported 00:07:11.586 00:07:11.586 Health Information 00:07:11.586 ================== 00:07:11.586 Critical Warnings: 00:07:11.586 
Available Spare Space: OK 00:07:11.586 Temperature: OK 00:07:11.586 Device Reliability: OK 00:07:11.586 Read Only: No 00:07:11.586 Volatile Memory Backup: OK 00:07:11.586 Current Temperature: 323 Kelvin (50 Celsius) 00:07:11.586 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:11.586 Available Spare: 0% 00:07:11.586 Available Spare Threshold: 0% 00:07:11.586 Life Percentage Used: 0% 00:07:11.586 Data Units Read: 1110 00:07:11.586 Data Units Written: 1040 00:07:11.586 Host Read Commands: 43073 00:07:11.586 Host Write Commands: 42496 00:07:11.586 Controller Busy Time: 0 minutes 00:07:11.586 Power Cycles: 0 00:07:11.586 Power On Hours: 0 hours 00:07:11.586 Unsafe Shutdowns: 0 00:07:11.586 Unrecoverable Media Errors: 0 00:07:11.586 Lifetime Error Log Entries: 0 00:07:11.586 Warning Temperature Time: 0 minutes 00:07:11.586 Critical Temperature Time: 0 minutes 00:07:11.586 00:07:11.586 Number of Queues 00:07:11.586 ================ 00:07:11.586 Number of I/O Submission Queues: 64 00:07:11.586 Number of I/O Completion Queues: 64 00:07:11.586 00:07:11.586 ZNS Specific Controller Data 00:07:11.586 ============================ 00:07:11.586 Zone Append Size Limit: 0 00:07:11.586 00:07:11.586 00:07:11.586 Active Namespaces 00:07:11.586 ================= 00:07:11.586 Namespace ID:1 00:07:11.586 Error Recovery Timeout: Unlimited 00:07:11.586 Command Set Identifier: NVM (00h) 00:07:11.586 Deallocate: Supported 00:07:11.586 Deallocated/Unwritten Error: Supported 00:07:11.586 Deallocated Read Value: All 0x00 00:07:11.586 Deallocate in Write Zeroes: Not Supported 00:07:11.586 Deallocated Guard Field: 0xFFFF 00:07:11.586 Flush: Supported 00:07:11.586 Reservation: Not Supported 00:07:11.586 Namespace Sharing Capabilities: Multiple Controllers 00:07:11.586 Size (in LBAs): 262144 (1GiB) 00:07:11.586 Capacity (in LBAs): 262144 (1GiB) 00:07:11.586 Utilization (in LBAs): 262144 (1GiB) 00:07:11.586 Thin Provisioning: Not Supported 00:07:11.586 Per-NS Atomic Units: No 00:07:11.586 Maximum Single Source Range Length: 128 00:07:11.586 Maximum Copy Length: 128 00:07:11.586 Maximum Source Range Count: 128 00:07:11.586 NGUID/EUI64 Never Reused: No 00:07:11.586 Namespace Write Protected: No 00:07:11.586 Endurance group ID: 1 00:07:11.586 Number of LBA Formats: 8 00:07:11.586 Current LBA Format: LBA Format #04 00:07:11.586 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.586 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.586 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.586 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.586 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.586 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.586 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.586 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.586 00:07:11.586 Get Feature FDP: 00:07:11.586 ================ 00:07:11.586 Enabled: Yes 00:07:11.586 FDP configuration index: 0 00:07:11.586 00:07:11.586 FDP configurations log page 00:07:11.586 =========================== 00:07:11.586 Number of FDP configurations: 1 00:07:11.586 Version: 0 00:07:11.586 Size: 112 00:07:11.586 FDP Configuration Descriptor: 0 00:07:11.586 Descriptor Size: 96 00:07:11.586 Reclaim Group Identifier format: 2 00:07:11.586 FDP Volatile Write Cache: Not Present 00:07:11.586 FDP Configuration: Valid 00:07:11.586 Vendor Specific Size: 0 00:07:11.586 Number of Reclaim Groups: 2 00:07:11.586 Number of Reclaim Unit Handles: 8 00:07:11.586 Max Placement Identifiers: 128 00:07:11.586 Number of 
Namespaces Supported: 256 00:07:11.586 Reclaim Unit Nominal Size: 6000000 bytes 00:07:11.586 Estimated Reclaim Unit Time Limit: Not Reported 00:07:11.586 RUH Desc #000: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #001: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #002: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #003: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #004: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #005: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #006: RUH Type: Initially Isolated 00:07:11.586 RUH Desc #007: RUH Type: Initially Isolated 00:07:11.586 00:07:11.586 FDP reclaim unit handle usage log page 00:07:11.586 ====================================== 00:07:11.586 Number of Reclaim Unit Handles: 8 00:07:11.586 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:11.586 RUH Usage Desc #001: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #002: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #003: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #004: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #005: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #006: RUH Attributes: Unused 00:07:11.586 RUH Usage Desc #007: RUH Attributes: Unused 00:07:11.586 00:07:11.586 FDP statistics log page 00:07:11.586 ======================= 00:07:11.586 Host bytes with metadata written: 644718592 00:07:11.586 [2024-10-13 17:37:01.350087] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63222 terminated unexpected 00:07:11.586 Media bytes with metadata written: 644919296 00:07:11.586 Media bytes erased: 0 00:07:11.586 00:07:11.586 FDP events log page 00:07:11.586 =================== 00:07:11.586 Number of FDP events: 0 00:07:11.586 00:07:11.586 NVM Specific Namespace Data 00:07:11.586 =========================== 00:07:11.586 Logical Block Storage Tag Mask: 0 00:07:11.586 Protection Information Capabilities: 00:07:11.586 16b Guard Protection Information Storage Tag Support: No 00:07:11.586 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.586 Storage Tag Check Read Support: No 00:07:11.586 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.586 ===================================================== 00:07:11.586 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:11.586 ===================================================== 00:07:11.586 Controller Capabilities/Features 00:07:11.586 ================================ 00:07:11.586 Vendor ID: 1b36 00:07:11.586 Subsystem Vendor ID: 1af4 00:07:11.586 Serial Number: 12342 00:07:11.586 Model Number: QEMU NVMe Ctrl 00:07:11.586 Firmware Version: 8.0.0 00:07:11.586 Recommended Arb Burst: 6 00:07:11.586 IEEE OUI Identifier: 00 54 52 00:07:11.586 Multi-path I/O 00:07:11.586 
May have multiple subsystem ports: No 00:07:11.586 May have multiple controllers: No 00:07:11.586 Associated with SR-IOV VF: No 00:07:11.586 Max Data Transfer Size: 524288 00:07:11.586 Max Number of Namespaces: 256 00:07:11.586 Max Number of I/O Queues: 64 00:07:11.586 NVMe Specification Version (VS): 1.4 00:07:11.586 NVMe Specification Version (Identify): 1.4 00:07:11.586 Maximum Queue Entries: 2048 00:07:11.587 Contiguous Queues Required: Yes 00:07:11.587 Arbitration Mechanisms Supported 00:07:11.587 Weighted Round Robin: Not Supported 00:07:11.587 Vendor Specific: Not Supported 00:07:11.587 Reset Timeout: 7500 ms 00:07:11.587 Doorbell Stride: 4 bytes 00:07:11.587 NVM Subsystem Reset: Not Supported 00:07:11.587 Command Sets Supported 00:07:11.587 NVM Command Set: Supported 00:07:11.587 Boot Partition: Not Supported 00:07:11.587 Memory Page Size Minimum: 4096 bytes 00:07:11.587 Memory Page Size Maximum: 65536 bytes 00:07:11.587 Persistent Memory Region: Not Supported 00:07:11.587 Optional Asynchronous Events Supported 00:07:11.587 Namespace Attribute Notices: Supported 00:07:11.587 Firmware Activation Notices: Not Supported 00:07:11.587 ANA Change Notices: Not Supported 00:07:11.587 PLE Aggregate Log Change Notices: Not Supported 00:07:11.587 LBA Status Info Alert Notices: Not Supported 00:07:11.587 EGE Aggregate Log Change Notices: Not Supported 00:07:11.587 Normal NVM Subsystem Shutdown event: Not Supported 00:07:11.587 Zone Descriptor Change Notices: Not Supported 00:07:11.587 Discovery Log Change Notices: Not Supported 00:07:11.587 Controller Attributes 00:07:11.587 128-bit Host Identifier: Not Supported 00:07:11.587 Non-Operational Permissive Mode: Not Supported 00:07:11.587 NVM Sets: Not Supported 00:07:11.587 Read Recovery Levels: Not Supported 00:07:11.587 Endurance Groups: Not Supported 00:07:11.587 Predictable Latency Mode: Not Supported 00:07:11.587 Traffic Based Keep ALive: Not Supported 00:07:11.587 Namespace Granularity: Not Supported 00:07:11.587 SQ Associations: Not Supported 00:07:11.587 UUID List: Not Supported 00:07:11.587 Multi-Domain Subsystem: Not Supported 00:07:11.587 Fixed Capacity Management: Not Supported 00:07:11.587 Variable Capacity Management: Not Supported 00:07:11.587 Delete Endurance Group: Not Supported 00:07:11.587 Delete NVM Set: Not Supported 00:07:11.587 Extended LBA Formats Supported: Supported 00:07:11.587 Flexible Data Placement Supported: Not Supported 00:07:11.587 00:07:11.587 Controller Memory Buffer Support 00:07:11.587 ================================ 00:07:11.587 Supported: No 00:07:11.587 00:07:11.587 Persistent Memory Region Support 00:07:11.587 ================================ 00:07:11.587 Supported: No 00:07:11.587 00:07:11.587 Admin Command Set Attributes 00:07:11.587 ============================ 00:07:11.587 Security Send/Receive: Not Supported 00:07:11.587 Format NVM: Supported 00:07:11.587 Firmware Activate/Download: Not Supported 00:07:11.587 Namespace Management: Supported 00:07:11.587 Device Self-Test: Not Supported 00:07:11.587 Directives: Supported 00:07:11.587 NVMe-MI: Not Supported 00:07:11.587 Virtualization Management: Not Supported 00:07:11.587 Doorbell Buffer Config: Supported 00:07:11.587 Get LBA Status Capability: Not Supported 00:07:11.587 Command & Feature Lockdown Capability: Not Supported 00:07:11.587 Abort Command Limit: 4 00:07:11.587 Async Event Request Limit: 4 00:07:11.587 Number of Firmware Slots: N/A 00:07:11.587 Firmware Slot 1 Read-Only: N/A 00:07:11.587 Firmware Activation Without Reset: N/A 00:07:11.587 
Multiple Update Detection Support: N/A 00:07:11.587 Firmware Update Granularity: No Information Provided 00:07:11.587 Per-Namespace SMART Log: Yes 00:07:11.587 Asymmetric Namespace Access Log Page: Not Supported 00:07:11.587 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:11.587 Command Effects Log Page: Supported 00:07:11.587 Get Log Page Extended Data: Supported 00:07:11.587 Telemetry Log Pages: Not Supported 00:07:11.587 Persistent Event Log Pages: Not Supported 00:07:11.587 Supported Log Pages Log Page: May Support 00:07:11.587 Commands Supported & Effects Log Page: Not Supported 00:07:11.587 Feature Identifiers & Effects Log Page:May Support 00:07:11.587 NVMe-MI Commands & Effects Log Page: May Support 00:07:11.587 Data Area 4 for Telemetry Log: Not Supported 00:07:11.587 Error Log Page Entries Supported: 1 00:07:11.587 Keep Alive: Not Supported 00:07:11.587 00:07:11.587 NVM Command Set Attributes 00:07:11.587 ========================== 00:07:11.587 Submission Queue Entry Size 00:07:11.587 Max: 64 00:07:11.587 Min: 64 00:07:11.587 Completion Queue Entry Size 00:07:11.587 Max: 16 00:07:11.587 Min: 16 00:07:11.587 Number of Namespaces: 256 00:07:11.587 Compare Command: Supported 00:07:11.587 Write Uncorrectable Command: Not Supported 00:07:11.587 Dataset Management Command: Supported 00:07:11.587 Write Zeroes Command: Supported 00:07:11.587 Set Features Save Field: Supported 00:07:11.587 Reservations: Not Supported 00:07:11.587 Timestamp: Supported 00:07:11.587 Copy: Supported 00:07:11.587 Volatile Write Cache: Present 00:07:11.587 Atomic Write Unit (Normal): 1 00:07:11.587 Atomic Write Unit (PFail): 1 00:07:11.587 Atomic Compare & Write Unit: 1 00:07:11.587 Fused Compare & Write: Not Supported 00:07:11.587 Scatter-Gather List 00:07:11.587 SGL Command Set: Supported 00:07:11.587 SGL Keyed: Not Supported 00:07:11.587 SGL Bit Bucket Descriptor: Not Supported 00:07:11.587 SGL Metadata Pointer: Not Supported 00:07:11.587 Oversized SGL: Not Supported 00:07:11.587 SGL Metadata Address: Not Supported 00:07:11.587 SGL Offset: Not Supported 00:07:11.587 Transport SGL Data Block: Not Supported 00:07:11.587 Replay Protected Memory Block: Not Supported 00:07:11.587 00:07:11.587 Firmware Slot Information 00:07:11.587 ========================= 00:07:11.587 Active slot: 1 00:07:11.587 Slot 1 Firmware Revision: 1.0 00:07:11.587 00:07:11.587 00:07:11.587 Commands Supported and Effects 00:07:11.587 ============================== 00:07:11.587 Admin Commands 00:07:11.587 -------------- 00:07:11.587 Delete I/O Submission Queue (00h): Supported 00:07:11.587 Create I/O Submission Queue (01h): Supported 00:07:11.587 Get Log Page (02h): Supported 00:07:11.587 Delete I/O Completion Queue (04h): Supported 00:07:11.587 Create I/O Completion Queue (05h): Supported 00:07:11.587 Identify (06h): Supported 00:07:11.587 Abort (08h): Supported 00:07:11.587 Set Features (09h): Supported 00:07:11.587 Get Features (0Ah): Supported 00:07:11.587 Asynchronous Event Request (0Ch): Supported 00:07:11.587 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:11.587 Directive Send (19h): Supported 00:07:11.587 Directive Receive (1Ah): Supported 00:07:11.587 Virtualization Management (1Ch): Supported 00:07:11.587 Doorbell Buffer Config (7Ch): Supported 00:07:11.587 Format NVM (80h): Supported LBA-Change 00:07:11.587 I/O Commands 00:07:11.587 ------------ 00:07:11.587 Flush (00h): Supported LBA-Change 00:07:11.587 Write (01h): Supported LBA-Change 00:07:11.587 Read (02h): Supported 00:07:11.587 Compare (05h): Supported 
00:07:11.587 Write Zeroes (08h): Supported LBA-Change 00:07:11.587 Dataset Management (09h): Supported LBA-Change 00:07:11.587 Unknown (0Ch): Supported 00:07:11.587 Unknown (12h): Supported 00:07:11.587 Copy (19h): Supported LBA-Change 00:07:11.587 Unknown (1Dh): Supported LBA-Change 00:07:11.587 00:07:11.588 Error Log 00:07:11.588 ========= 00:07:11.588 00:07:11.588 Arbitration 00:07:11.588 =========== 00:07:11.588 Arbitration Burst: no limit 00:07:11.588 00:07:11.588 Power Management 00:07:11.588 ================ 00:07:11.588 Number of Power States: 1 00:07:11.588 Current Power State: Power State #0 00:07:11.588 Power State #0: 00:07:11.588 Max Power: 25.00 W 00:07:11.588 Non-Operational State: Operational 00:07:11.588 Entry Latency: 16 microseconds 00:07:11.588 Exit Latency: 4 microseconds 00:07:11.588 Relative Read Throughput: 0 00:07:11.588 Relative Read Latency: 0 00:07:11.588 Relative Write Throughput: 0 00:07:11.588 Relative Write Latency: 0 00:07:11.588 Idle Power: Not Reported 00:07:11.588 Active Power: Not Reported 00:07:11.588 Non-Operational Permissive Mode: Not Supported 00:07:11.588 00:07:11.588 Health Information 00:07:11.588 ================== 00:07:11.588 Critical Warnings: 00:07:11.588 Available Spare Space: OK 00:07:11.588 Temperature: OK 00:07:11.588 Device Reliability: OK 00:07:11.588 Read Only: No 00:07:11.588 Volatile Memory Backup: OK 00:07:11.588 Current Temperature: 323 Kelvin (50 Celsius) 00:07:11.588 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:11.588 Available Spare: 0% 00:07:11.588 Available Spare Threshold: 0% 00:07:11.588 Life Percentage Used: 0% 00:07:11.588 Data Units Read: 2287 00:07:11.588 Data Units Written: 2074 00:07:11.588 Host Read Commands: 120889 00:07:11.588 Host Write Commands: 119158 00:07:11.588 Controller Busy Time: 0 minutes 00:07:11.588 Power Cycles: 0 00:07:11.588 Power On Hours: 0 hours 00:07:11.588 Unsafe Shutdowns: 0 00:07:11.588 Unrecoverable Media Errors: 0 00:07:11.588 Lifetime Error Log Entries: 0 00:07:11.588 Warning Temperature Time: 0 minutes 00:07:11.588 Critical Temperature Time: 0 minutes 00:07:11.588 00:07:11.588 Number of Queues 00:07:11.588 ================ 00:07:11.588 Number of I/O Submission Queues: 64 00:07:11.588 Number of I/O Completion Queues: 64 00:07:11.588 00:07:11.588 ZNS Specific Controller Data 00:07:11.588 ============================ 00:07:11.588 Zone Append Size Limit: 0 00:07:11.588 00:07:11.588 00:07:11.588 Active Namespaces 00:07:11.588 ================= 00:07:11.588 Namespace ID:1 00:07:11.588 Error Recovery Timeout: Unlimited 00:07:11.588 Command Set Identifier: NVM (00h) 00:07:11.588 Deallocate: Supported 00:07:11.588 Deallocated/Unwritten Error: Supported 00:07:11.588 Deallocated Read Value: All 0x00 00:07:11.588 Deallocate in Write Zeroes: Not Supported 00:07:11.588 Deallocated Guard Field: 0xFFFF 00:07:11.588 Flush: Supported 00:07:11.588 Reservation: Not Supported 00:07:11.588 Namespace Sharing Capabilities: Private 00:07:11.588 Size (in LBAs): 1048576 (4GiB) 00:07:11.588 Capacity (in LBAs): 1048576 (4GiB) 00:07:11.588 Utilization (in LBAs): 1048576 (4GiB) 00:07:11.588 Thin Provisioning: Not Supported 00:07:11.588 Per-NS Atomic Units: No 00:07:11.588 Maximum Single Source Range Length: 128 00:07:11.588 Maximum Copy Length: 128 00:07:11.588 Maximum Source Range Count: 128 00:07:11.588 NGUID/EUI64 Never Reused: No 00:07:11.588 Namespace Write Protected: No 00:07:11.588 Number of LBA Formats: 8 00:07:11.588 Current LBA Format: LBA Format #04 00:07:11.588 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:07:11.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.588 00:07:11.588 NVM Specific Namespace Data 00:07:11.588 =========================== 00:07:11.588 Logical Block Storage Tag Mask: 0 00:07:11.588 Protection Information Capabilities: 00:07:11.588 16b Guard Protection Information Storage Tag Support: No 00:07:11.588 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.588 Storage Tag Check Read Support: No 00:07:11.588 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Namespace ID:2 00:07:11.588 Error Recovery Timeout: Unlimited 00:07:11.588 Command Set Identifier: NVM (00h) 00:07:11.588 Deallocate: Supported 00:07:11.588 Deallocated/Unwritten Error: Supported 00:07:11.588 Deallocated Read Value: All 0x00 00:07:11.588 Deallocate in Write Zeroes: Not Supported 00:07:11.588 Deallocated Guard Field: 0xFFFF 00:07:11.588 Flush: Supported 00:07:11.588 Reservation: Not Supported 00:07:11.588 Namespace Sharing Capabilities: Private 00:07:11.588 Size (in LBAs): 1048576 (4GiB) 00:07:11.588 Capacity (in LBAs): 1048576 (4GiB) 00:07:11.588 Utilization (in LBAs): 1048576 (4GiB) 00:07:11.588 Thin Provisioning: Not Supported 00:07:11.588 Per-NS Atomic Units: No 00:07:11.588 Maximum Single Source Range Length: 128 00:07:11.588 Maximum Copy Length: 128 00:07:11.588 Maximum Source Range Count: 128 00:07:11.588 NGUID/EUI64 Never Reused: No 00:07:11.588 Namespace Write Protected: No 00:07:11.588 Number of LBA Formats: 8 00:07:11.588 Current LBA Format: LBA Format #04 00:07:11.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.588 00:07:11.588 NVM Specific Namespace Data 00:07:11.588 =========================== 00:07:11.588 Logical Block Storage Tag Mask: 0 00:07:11.588 Protection Information Capabilities: 00:07:11.588 16b Guard Protection Information Storage Tag Support: No 00:07:11.588 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:11.588 Storage Tag Check Read Support: No 00:07:11.588 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.588 Namespace ID:3 00:07:11.588 Error Recovery Timeout: Unlimited 00:07:11.588 Command Set Identifier: NVM (00h) 00:07:11.588 Deallocate: Supported 00:07:11.588 Deallocated/Unwritten Error: Supported 00:07:11.588 Deallocated Read Value: All 0x00 00:07:11.588 Deallocate in Write Zeroes: Not Supported 00:07:11.588 Deallocated Guard Field: 0xFFFF 00:07:11.588 Flush: Supported 00:07:11.588 Reservation: Not Supported 00:07:11.588 Namespace Sharing Capabilities: Private 00:07:11.588 Size (in LBAs): 1048576 (4GiB) 00:07:11.588 Capacity (in LBAs): 1048576 (4GiB) 00:07:11.588 Utilization (in LBAs): 1048576 (4GiB) 00:07:11.588 Thin Provisioning: Not Supported 00:07:11.588 Per-NS Atomic Units: No 00:07:11.588 Maximum Single Source Range Length: 128 00:07:11.588 Maximum Copy Length: 128 00:07:11.588 Maximum Source Range Count: 128 00:07:11.588 NGUID/EUI64 Never Reused: No 00:07:11.588 Namespace Write Protected: No 00:07:11.588 Number of LBA Formats: 8 00:07:11.588 Current LBA Format: LBA Format #04 00:07:11.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.588 00:07:11.588 NVM Specific Namespace Data 00:07:11.588 =========================== 00:07:11.588 Logical Block Storage Tag Mask: 0 00:07:11.588 Protection Information Capabilities: 00:07:11.588 16b Guard Protection Information Storage Tag Support: No 00:07:11.588 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.588 Storage Tag Check Read Support: No 00:07:11.589 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.589 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:11.589 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:11.847 ===================================================== 00:07:11.847 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:11.847 ===================================================== 00:07:11.847 Controller Capabilities/Features 00:07:11.847 ================================ 00:07:11.847 Vendor ID: 1b36 00:07:11.847 Subsystem Vendor ID: 1af4 00:07:11.847 Serial Number: 12340 00:07:11.847 Model Number: QEMU NVMe Ctrl 00:07:11.847 Firmware Version: 8.0.0 00:07:11.847 Recommended Arb Burst: 6 00:07:11.847 IEEE OUI Identifier: 00 54 52 00:07:11.847 Multi-path I/O 00:07:11.847 May have multiple subsystem ports: No 00:07:11.847 May have multiple controllers: No 00:07:11.847 Associated with SR-IOV VF: No 00:07:11.847 Max Data Transfer Size: 524288 00:07:11.847 Max Number of Namespaces: 256 00:07:11.847 Max Number of I/O Queues: 64 00:07:11.847 NVMe Specification Version (VS): 1.4 00:07:11.847 NVMe Specification Version (Identify): 1.4 00:07:11.848 Maximum Queue Entries: 2048 00:07:11.848 Contiguous Queues Required: Yes 00:07:11.848 Arbitration Mechanisms Supported 00:07:11.848 Weighted Round Robin: Not Supported 00:07:11.848 Vendor Specific: Not Supported 00:07:11.848 Reset Timeout: 7500 ms 00:07:11.848 Doorbell Stride: 4 bytes 00:07:11.848 NVM Subsystem Reset: Not Supported 00:07:11.848 Command Sets Supported 00:07:11.848 NVM Command Set: Supported 00:07:11.848 Boot Partition: Not Supported 00:07:11.848 Memory Page Size Minimum: 4096 bytes 00:07:11.848 Memory Page Size Maximum: 65536 bytes 00:07:11.848 Persistent Memory Region: Not Supported 00:07:11.848 Optional Asynchronous Events Supported 00:07:11.848 Namespace Attribute Notices: Supported 00:07:11.848 Firmware Activation Notices: Not Supported 00:07:11.848 ANA Change Notices: Not Supported 00:07:11.848 PLE Aggregate Log Change Notices: Not Supported 00:07:11.848 LBA Status Info Alert Notices: Not Supported 00:07:11.848 EGE Aggregate Log Change Notices: Not Supported 00:07:11.848 Normal NVM Subsystem Shutdown event: Not Supported 00:07:11.848 Zone Descriptor Change Notices: Not Supported 00:07:11.848 Discovery Log Change Notices: Not Supported 00:07:11.848 Controller Attributes 00:07:11.848 128-bit Host Identifier: Not Supported 00:07:11.848 Non-Operational Permissive Mode: Not Supported 00:07:11.848 NVM Sets: Not Supported 00:07:11.848 Read Recovery Levels: Not Supported 00:07:11.848 Endurance Groups: Not Supported 00:07:11.848 Predictable Latency Mode: Not Supported 00:07:11.848 Traffic Based Keep ALive: Not Supported 00:07:11.848 Namespace Granularity: Not Supported 00:07:11.848 SQ Associations: Not Supported 00:07:11.848 UUID List: Not Supported 00:07:11.848 Multi-Domain Subsystem: Not Supported 00:07:11.848 Fixed Capacity Management: Not Supported 00:07:11.848 Variable Capacity Management: Not Supported 00:07:11.848 Delete Endurance Group: Not Supported 00:07:11.848 Delete NVM Set: Not Supported 00:07:11.848 Extended LBA Formats Supported: Supported 00:07:11.848 Flexible Data Placement Supported: Not Supported 00:07:11.848 00:07:11.848 Controller Memory Buffer Support 00:07:11.848 ================================ 00:07:11.848 Supported: No 00:07:11.848 00:07:11.848 Persistent Memory Region Support 00:07:11.848 
================================ 00:07:11.848 Supported: No 00:07:11.848 00:07:11.848 Admin Command Set Attributes 00:07:11.848 ============================ 00:07:11.848 Security Send/Receive: Not Supported 00:07:11.848 Format NVM: Supported 00:07:11.848 Firmware Activate/Download: Not Supported 00:07:11.848 Namespace Management: Supported 00:07:11.848 Device Self-Test: Not Supported 00:07:11.848 Directives: Supported 00:07:11.848 NVMe-MI: Not Supported 00:07:11.848 Virtualization Management: Not Supported 00:07:11.848 Doorbell Buffer Config: Supported 00:07:11.848 Get LBA Status Capability: Not Supported 00:07:11.848 Command & Feature Lockdown Capability: Not Supported 00:07:11.848 Abort Command Limit: 4 00:07:11.848 Async Event Request Limit: 4 00:07:11.848 Number of Firmware Slots: N/A 00:07:11.848 Firmware Slot 1 Read-Only: N/A 00:07:11.848 Firmware Activation Without Reset: N/A 00:07:11.848 Multiple Update Detection Support: N/A 00:07:11.848 Firmware Update Granularity: No Information Provided 00:07:11.848 Per-Namespace SMART Log: Yes 00:07:11.848 Asymmetric Namespace Access Log Page: Not Supported 00:07:11.848 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:11.848 Command Effects Log Page: Supported 00:07:11.848 Get Log Page Extended Data: Supported 00:07:11.848 Telemetry Log Pages: Not Supported 00:07:11.848 Persistent Event Log Pages: Not Supported 00:07:11.848 Supported Log Pages Log Page: May Support 00:07:11.848 Commands Supported & Effects Log Page: Not Supported 00:07:11.848 Feature Identifiers & Effects Log Page:May Support 00:07:11.848 NVMe-MI Commands & Effects Log Page: May Support 00:07:11.848 Data Area 4 for Telemetry Log: Not Supported 00:07:11.848 Error Log Page Entries Supported: 1 00:07:11.848 Keep Alive: Not Supported 00:07:11.848 00:07:11.848 NVM Command Set Attributes 00:07:11.848 ========================== 00:07:11.848 Submission Queue Entry Size 00:07:11.848 Max: 64 00:07:11.848 Min: 64 00:07:11.848 Completion Queue Entry Size 00:07:11.848 Max: 16 00:07:11.848 Min: 16 00:07:11.848 Number of Namespaces: 256 00:07:11.848 Compare Command: Supported 00:07:11.848 Write Uncorrectable Command: Not Supported 00:07:11.848 Dataset Management Command: Supported 00:07:11.848 Write Zeroes Command: Supported 00:07:11.848 Set Features Save Field: Supported 00:07:11.848 Reservations: Not Supported 00:07:11.848 Timestamp: Supported 00:07:11.848 Copy: Supported 00:07:11.848 Volatile Write Cache: Present 00:07:11.848 Atomic Write Unit (Normal): 1 00:07:11.848 Atomic Write Unit (PFail): 1 00:07:11.848 Atomic Compare & Write Unit: 1 00:07:11.848 Fused Compare & Write: Not Supported 00:07:11.848 Scatter-Gather List 00:07:11.848 SGL Command Set: Supported 00:07:11.848 SGL Keyed: Not Supported 00:07:11.848 SGL Bit Bucket Descriptor: Not Supported 00:07:11.848 SGL Metadata Pointer: Not Supported 00:07:11.848 Oversized SGL: Not Supported 00:07:11.848 SGL Metadata Address: Not Supported 00:07:11.848 SGL Offset: Not Supported 00:07:11.848 Transport SGL Data Block: Not Supported 00:07:11.848 Replay Protected Memory Block: Not Supported 00:07:11.848 00:07:11.848 Firmware Slot Information 00:07:11.848 ========================= 00:07:11.848 Active slot: 1 00:07:11.848 Slot 1 Firmware Revision: 1.0 00:07:11.848 00:07:11.848 00:07:11.848 Commands Supported and Effects 00:07:11.848 ============================== 00:07:11.848 Admin Commands 00:07:11.848 -------------- 00:07:11.848 Delete I/O Submission Queue (00h): Supported 00:07:11.848 Create I/O Submission Queue (01h): Supported 00:07:11.848 
Get Log Page (02h): Supported 00:07:11.848 Delete I/O Completion Queue (04h): Supported 00:07:11.848 Create I/O Completion Queue (05h): Supported 00:07:11.848 Identify (06h): Supported 00:07:11.848 Abort (08h): Supported 00:07:11.848 Set Features (09h): Supported 00:07:11.848 Get Features (0Ah): Supported 00:07:11.848 Asynchronous Event Request (0Ch): Supported 00:07:11.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:11.848 Directive Send (19h): Supported 00:07:11.848 Directive Receive (1Ah): Supported 00:07:11.848 Virtualization Management (1Ch): Supported 00:07:11.848 Doorbell Buffer Config (7Ch): Supported 00:07:11.848 Format NVM (80h): Supported LBA-Change 00:07:11.848 I/O Commands 00:07:11.848 ------------ 00:07:11.848 Flush (00h): Supported LBA-Change 00:07:11.848 Write (01h): Supported LBA-Change 00:07:11.848 Read (02h): Supported 00:07:11.848 Compare (05h): Supported 00:07:11.848 Write Zeroes (08h): Supported LBA-Change 00:07:11.848 Dataset Management (09h): Supported LBA-Change 00:07:11.848 Unknown (0Ch): Supported 00:07:11.848 Unknown (12h): Supported 00:07:11.848 Copy (19h): Supported LBA-Change 00:07:11.848 Unknown (1Dh): Supported LBA-Change 00:07:11.848 00:07:11.848 Error Log 00:07:11.848 ========= 00:07:11.848 00:07:11.848 Arbitration 00:07:11.848 =========== 00:07:11.848 Arbitration Burst: no limit 00:07:11.848 00:07:11.848 Power Management 00:07:11.848 ================ 00:07:11.848 Number of Power States: 1 00:07:11.848 Current Power State: Power State #0 00:07:11.848 Power State #0: 00:07:11.848 Max Power: 25.00 W 00:07:11.848 Non-Operational State: Operational 00:07:11.848 Entry Latency: 16 microseconds 00:07:11.848 Exit Latency: 4 microseconds 00:07:11.848 Relative Read Throughput: 0 00:07:11.848 Relative Read Latency: 0 00:07:11.848 Relative Write Throughput: 0 00:07:11.848 Relative Write Latency: 0 00:07:11.848 Idle Power: Not Reported 00:07:11.848 Active Power: Not Reported 00:07:11.848 Non-Operational Permissive Mode: Not Supported 00:07:11.848 00:07:11.848 Health Information 00:07:11.848 ================== 00:07:11.848 Critical Warnings: 00:07:11.848 Available Spare Space: OK 00:07:11.848 Temperature: OK 00:07:11.849 Device Reliability: OK 00:07:11.849 Read Only: No 00:07:11.849 Volatile Memory Backup: OK 00:07:11.849 Current Temperature: 323 Kelvin (50 Celsius) 00:07:11.849 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:11.849 Available Spare: 0% 00:07:11.849 Available Spare Threshold: 0% 00:07:11.849 Life Percentage Used: 0% 00:07:11.849 Data Units Read: 669 00:07:11.849 Data Units Written: 597 00:07:11.849 Host Read Commands: 39263 00:07:11.849 Host Write Commands: 39049 00:07:11.849 Controller Busy Time: 0 minutes 00:07:11.849 Power Cycles: 0 00:07:11.849 Power On Hours: 0 hours 00:07:11.849 Unsafe Shutdowns: 0 00:07:11.849 Unrecoverable Media Errors: 0 00:07:11.849 Lifetime Error Log Entries: 0 00:07:11.849 Warning Temperature Time: 0 minutes 00:07:11.849 Critical Temperature Time: 0 minutes 00:07:11.849 00:07:11.849 Number of Queues 00:07:11.849 ================ 00:07:11.849 Number of I/O Submission Queues: 64 00:07:11.849 Number of I/O Completion Queues: 64 00:07:11.849 00:07:11.849 ZNS Specific Controller Data 00:07:11.849 ============================ 00:07:11.849 Zone Append Size Limit: 0 00:07:11.849 00:07:11.849 00:07:11.849 Active Namespaces 00:07:11.849 ================= 00:07:11.849 Namespace ID:1 00:07:11.849 Error Recovery Timeout: Unlimited 00:07:11.849 Command Set Identifier: NVM (00h) 00:07:11.849 Deallocate: Supported 
00:07:11.849 Deallocated/Unwritten Error: Supported 00:07:11.849 Deallocated Read Value: All 0x00 00:07:11.849 Deallocate in Write Zeroes: Not Supported 00:07:11.849 Deallocated Guard Field: 0xFFFF 00:07:11.849 Flush: Supported 00:07:11.849 Reservation: Not Supported 00:07:11.849 Metadata Transferred as: Separate Metadata Buffer 00:07:11.849 Namespace Sharing Capabilities: Private 00:07:11.849 Size (in LBAs): 1548666 (5GiB) 00:07:11.849 Capacity (in LBAs): 1548666 (5GiB) 00:07:11.849 Utilization (in LBAs): 1548666 (5GiB) 00:07:11.849 Thin Provisioning: Not Supported 00:07:11.849 Per-NS Atomic Units: No 00:07:11.849 Maximum Single Source Range Length: 128 00:07:11.849 Maximum Copy Length: 128 00:07:11.849 Maximum Source Range Count: 128 00:07:11.849 NGUID/EUI64 Never Reused: No 00:07:11.849 Namespace Write Protected: No 00:07:11.849 Number of LBA Formats: 8 00:07:11.849 Current LBA Format: LBA Format #07 00:07:11.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:11.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:11.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:11.849 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:11.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:11.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:11.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:11.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:11.849 00:07:11.849 NVM Specific Namespace Data 00:07:11.849 =========================== 00:07:11.849 Logical Block Storage Tag Mask: 0 00:07:11.849 Protection Information Capabilities: 00:07:11.849 16b Guard Protection Information Storage Tag Support: No 00:07:11.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:11.849 Storage Tag Check Read Support: No 00:07:11.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:11.849 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:11.849 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:12.110 ===================================================== 00:07:12.110 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:12.110 ===================================================== 00:07:12.110 Controller Capabilities/Features 00:07:12.110 ================================ 00:07:12.110 Vendor ID: 1b36 00:07:12.110 Subsystem Vendor ID: 1af4 00:07:12.110 Serial Number: 12341 00:07:12.110 Model Number: QEMU NVMe Ctrl 00:07:12.110 Firmware Version: 8.0.0 00:07:12.110 Recommended Arb Burst: 6 00:07:12.110 IEEE OUI Identifier: 00 54 52 00:07:12.110 Multi-path I/O 00:07:12.110 May have multiple subsystem ports: No 00:07:12.110 May have multiple 
controllers: No 00:07:12.110 Associated with SR-IOV VF: No 00:07:12.110 Max Data Transfer Size: 524288 00:07:12.110 Max Number of Namespaces: 256 00:07:12.110 Max Number of I/O Queues: 64 00:07:12.110 NVMe Specification Version (VS): 1.4 00:07:12.110 NVMe Specification Version (Identify): 1.4 00:07:12.110 Maximum Queue Entries: 2048 00:07:12.110 Contiguous Queues Required: Yes 00:07:12.110 Arbitration Mechanisms Supported 00:07:12.110 Weighted Round Robin: Not Supported 00:07:12.110 Vendor Specific: Not Supported 00:07:12.110 Reset Timeout: 7500 ms 00:07:12.110 Doorbell Stride: 4 bytes 00:07:12.110 NVM Subsystem Reset: Not Supported 00:07:12.110 Command Sets Supported 00:07:12.110 NVM Command Set: Supported 00:07:12.110 Boot Partition: Not Supported 00:07:12.110 Memory Page Size Minimum: 4096 bytes 00:07:12.110 Memory Page Size Maximum: 65536 bytes 00:07:12.110 Persistent Memory Region: Not Supported 00:07:12.110 Optional Asynchronous Events Supported 00:07:12.110 Namespace Attribute Notices: Supported 00:07:12.110 Firmware Activation Notices: Not Supported 00:07:12.110 ANA Change Notices: Not Supported 00:07:12.110 PLE Aggregate Log Change Notices: Not Supported 00:07:12.110 LBA Status Info Alert Notices: Not Supported 00:07:12.110 EGE Aggregate Log Change Notices: Not Supported 00:07:12.110 Normal NVM Subsystem Shutdown event: Not Supported 00:07:12.110 Zone Descriptor Change Notices: Not Supported 00:07:12.110 Discovery Log Change Notices: Not Supported 00:07:12.110 Controller Attributes 00:07:12.110 128-bit Host Identifier: Not Supported 00:07:12.110 Non-Operational Permissive Mode: Not Supported 00:07:12.110 NVM Sets: Not Supported 00:07:12.110 Read Recovery Levels: Not Supported 00:07:12.110 Endurance Groups: Not Supported 00:07:12.110 Predictable Latency Mode: Not Supported 00:07:12.110 Traffic Based Keep ALive: Not Supported 00:07:12.110 Namespace Granularity: Not Supported 00:07:12.110 SQ Associations: Not Supported 00:07:12.110 UUID List: Not Supported 00:07:12.110 Multi-Domain Subsystem: Not Supported 00:07:12.110 Fixed Capacity Management: Not Supported 00:07:12.110 Variable Capacity Management: Not Supported 00:07:12.110 Delete Endurance Group: Not Supported 00:07:12.110 Delete NVM Set: Not Supported 00:07:12.110 Extended LBA Formats Supported: Supported 00:07:12.110 Flexible Data Placement Supported: Not Supported 00:07:12.110 00:07:12.110 Controller Memory Buffer Support 00:07:12.110 ================================ 00:07:12.110 Supported: No 00:07:12.110 00:07:12.110 Persistent Memory Region Support 00:07:12.110 ================================ 00:07:12.110 Supported: No 00:07:12.110 00:07:12.110 Admin Command Set Attributes 00:07:12.110 ============================ 00:07:12.110 Security Send/Receive: Not Supported 00:07:12.110 Format NVM: Supported 00:07:12.110 Firmware Activate/Download: Not Supported 00:07:12.110 Namespace Management: Supported 00:07:12.110 Device Self-Test: Not Supported 00:07:12.110 Directives: Supported 00:07:12.110 NVMe-MI: Not Supported 00:07:12.110 Virtualization Management: Not Supported 00:07:12.110 Doorbell Buffer Config: Supported 00:07:12.110 Get LBA Status Capability: Not Supported 00:07:12.110 Command & Feature Lockdown Capability: Not Supported 00:07:12.110 Abort Command Limit: 4 00:07:12.110 Async Event Request Limit: 4 00:07:12.110 Number of Firmware Slots: N/A 00:07:12.110 Firmware Slot 1 Read-Only: N/A 00:07:12.110 Firmware Activation Without Reset: N/A 00:07:12.110 Multiple Update Detection Support: N/A 00:07:12.110 Firmware Update 
Granularity: No Information Provided 00:07:12.110 Per-Namespace SMART Log: Yes 00:07:12.110 Asymmetric Namespace Access Log Page: Not Supported 00:07:12.110 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:12.110 Command Effects Log Page: Supported 00:07:12.110 Get Log Page Extended Data: Supported 00:07:12.110 Telemetry Log Pages: Not Supported 00:07:12.110 Persistent Event Log Pages: Not Supported 00:07:12.110 Supported Log Pages Log Page: May Support 00:07:12.110 Commands Supported & Effects Log Page: Not Supported 00:07:12.110 Feature Identifiers & Effects Log Page:May Support 00:07:12.110 NVMe-MI Commands & Effects Log Page: May Support 00:07:12.110 Data Area 4 for Telemetry Log: Not Supported 00:07:12.110 Error Log Page Entries Supported: 1 00:07:12.110 Keep Alive: Not Supported 00:07:12.110 00:07:12.110 NVM Command Set Attributes 00:07:12.110 ========================== 00:07:12.110 Submission Queue Entry Size 00:07:12.110 Max: 64 00:07:12.110 Min: 64 00:07:12.110 Completion Queue Entry Size 00:07:12.110 Max: 16 00:07:12.110 Min: 16 00:07:12.110 Number of Namespaces: 256 00:07:12.110 Compare Command: Supported 00:07:12.110 Write Uncorrectable Command: Not Supported 00:07:12.110 Dataset Management Command: Supported 00:07:12.110 Write Zeroes Command: Supported 00:07:12.110 Set Features Save Field: Supported 00:07:12.110 Reservations: Not Supported 00:07:12.110 Timestamp: Supported 00:07:12.110 Copy: Supported 00:07:12.110 Volatile Write Cache: Present 00:07:12.110 Atomic Write Unit (Normal): 1 00:07:12.110 Atomic Write Unit (PFail): 1 00:07:12.110 Atomic Compare & Write Unit: 1 00:07:12.110 Fused Compare & Write: Not Supported 00:07:12.110 Scatter-Gather List 00:07:12.110 SGL Command Set: Supported 00:07:12.110 SGL Keyed: Not Supported 00:07:12.110 SGL Bit Bucket Descriptor: Not Supported 00:07:12.110 SGL Metadata Pointer: Not Supported 00:07:12.110 Oversized SGL: Not Supported 00:07:12.110 SGL Metadata Address: Not Supported 00:07:12.110 SGL Offset: Not Supported 00:07:12.110 Transport SGL Data Block: Not Supported 00:07:12.110 Replay Protected Memory Block: Not Supported 00:07:12.110 00:07:12.110 Firmware Slot Information 00:07:12.110 ========================= 00:07:12.110 Active slot: 1 00:07:12.110 Slot 1 Firmware Revision: 1.0 00:07:12.110 00:07:12.110 00:07:12.110 Commands Supported and Effects 00:07:12.110 ============================== 00:07:12.110 Admin Commands 00:07:12.110 -------------- 00:07:12.110 Delete I/O Submission Queue (00h): Supported 00:07:12.111 Create I/O Submission Queue (01h): Supported 00:07:12.111 Get Log Page (02h): Supported 00:07:12.111 Delete I/O Completion Queue (04h): Supported 00:07:12.111 Create I/O Completion Queue (05h): Supported 00:07:12.111 Identify (06h): Supported 00:07:12.111 Abort (08h): Supported 00:07:12.111 Set Features (09h): Supported 00:07:12.111 Get Features (0Ah): Supported 00:07:12.111 Asynchronous Event Request (0Ch): Supported 00:07:12.111 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:12.111 Directive Send (19h): Supported 00:07:12.111 Directive Receive (1Ah): Supported 00:07:12.111 Virtualization Management (1Ch): Supported 00:07:12.111 Doorbell Buffer Config (7Ch): Supported 00:07:12.111 Format NVM (80h): Supported LBA-Change 00:07:12.111 I/O Commands 00:07:12.111 ------------ 00:07:12.111 Flush (00h): Supported LBA-Change 00:07:12.111 Write (01h): Supported LBA-Change 00:07:12.111 Read (02h): Supported 00:07:12.111 Compare (05h): Supported 00:07:12.111 Write Zeroes (08h): Supported LBA-Change 00:07:12.111 
Dataset Management (09h): Supported LBA-Change 00:07:12.111 Unknown (0Ch): Supported 00:07:12.111 Unknown (12h): Supported 00:07:12.111 Copy (19h): Supported LBA-Change 00:07:12.111 Unknown (1Dh): Supported LBA-Change 00:07:12.111 00:07:12.111 Error Log 00:07:12.111 ========= 00:07:12.111 00:07:12.111 Arbitration 00:07:12.111 =========== 00:07:12.111 Arbitration Burst: no limit 00:07:12.111 00:07:12.111 Power Management 00:07:12.111 ================ 00:07:12.111 Number of Power States: 1 00:07:12.111 Current Power State: Power State #0 00:07:12.111 Power State #0: 00:07:12.111 Max Power: 25.00 W 00:07:12.111 Non-Operational State: Operational 00:07:12.111 Entry Latency: 16 microseconds 00:07:12.111 Exit Latency: 4 microseconds 00:07:12.111 Relative Read Throughput: 0 00:07:12.111 Relative Read Latency: 0 00:07:12.111 Relative Write Throughput: 0 00:07:12.111 Relative Write Latency: 0 00:07:12.111 Idle Power: Not Reported 00:07:12.111 Active Power: Not Reported 00:07:12.111 Non-Operational Permissive Mode: Not Supported 00:07:12.111 00:07:12.111 Health Information 00:07:12.111 ================== 00:07:12.111 Critical Warnings: 00:07:12.111 Available Spare Space: OK 00:07:12.111 Temperature: OK 00:07:12.111 Device Reliability: OK 00:07:12.111 Read Only: No 00:07:12.111 Volatile Memory Backup: OK 00:07:12.111 Current Temperature: 323 Kelvin (50 Celsius) 00:07:12.111 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:12.111 Available Spare: 0% 00:07:12.111 Available Spare Threshold: 0% 00:07:12.111 Life Percentage Used: 0% 00:07:12.111 Data Units Read: 994 00:07:12.111 Data Units Written: 867 00:07:12.111 Host Read Commands: 57813 00:07:12.111 Host Write Commands: 56701 00:07:12.111 Controller Busy Time: 0 minutes 00:07:12.111 Power Cycles: 0 00:07:12.111 Power On Hours: 0 hours 00:07:12.111 Unsafe Shutdowns: 0 00:07:12.111 Unrecoverable Media Errors: 0 00:07:12.111 Lifetime Error Log Entries: 0 00:07:12.111 Warning Temperature Time: 0 minutes 00:07:12.111 Critical Temperature Time: 0 minutes 00:07:12.111 00:07:12.111 Number of Queues 00:07:12.111 ================ 00:07:12.111 Number of I/O Submission Queues: 64 00:07:12.111 Number of I/O Completion Queues: 64 00:07:12.111 00:07:12.111 ZNS Specific Controller Data 00:07:12.111 ============================ 00:07:12.111 Zone Append Size Limit: 0 00:07:12.111 00:07:12.111 00:07:12.111 Active Namespaces 00:07:12.111 ================= 00:07:12.111 Namespace ID:1 00:07:12.111 Error Recovery Timeout: Unlimited 00:07:12.111 Command Set Identifier: NVM (00h) 00:07:12.111 Deallocate: Supported 00:07:12.111 Deallocated/Unwritten Error: Supported 00:07:12.111 Deallocated Read Value: All 0x00 00:07:12.111 Deallocate in Write Zeroes: Not Supported 00:07:12.111 Deallocated Guard Field: 0xFFFF 00:07:12.111 Flush: Supported 00:07:12.111 Reservation: Not Supported 00:07:12.111 Namespace Sharing Capabilities: Private 00:07:12.111 Size (in LBAs): 1310720 (5GiB) 00:07:12.111 Capacity (in LBAs): 1310720 (5GiB) 00:07:12.111 Utilization (in LBAs): 1310720 (5GiB) 00:07:12.111 Thin Provisioning: Not Supported 00:07:12.111 Per-NS Atomic Units: No 00:07:12.111 Maximum Single Source Range Length: 128 00:07:12.111 Maximum Copy Length: 128 00:07:12.111 Maximum Source Range Count: 128 00:07:12.111 NGUID/EUI64 Never Reused: No 00:07:12.111 Namespace Write Protected: No 00:07:12.111 Number of LBA Formats: 8 00:07:12.111 Current LBA Format: LBA Format #04 00:07:12.111 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:12.111 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:12.111 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:12.111 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:12.111 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:12.111 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:12.111 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:12.111 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:12.111 00:07:12.111 NVM Specific Namespace Data 00:07:12.111 =========================== 00:07:12.111 Logical Block Storage Tag Mask: 0 00:07:12.111 Protection Information Capabilities: 00:07:12.111 16b Guard Protection Information Storage Tag Support: No 00:07:12.111 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:12.111 Storage Tag Check Read Support: No 00:07:12.111 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.111 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:12.111 17:37:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:12.382 ===================================================== 00:07:12.382 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:12.382 ===================================================== 00:07:12.383 Controller Capabilities/Features 00:07:12.383 ================================ 00:07:12.383 Vendor ID: 1b36 00:07:12.383 Subsystem Vendor ID: 1af4 00:07:12.383 Serial Number: 12342 00:07:12.383 Model Number: QEMU NVMe Ctrl 00:07:12.383 Firmware Version: 8.0.0 00:07:12.383 Recommended Arb Burst: 6 00:07:12.383 IEEE OUI Identifier: 00 54 52 00:07:12.383 Multi-path I/O 00:07:12.383 May have multiple subsystem ports: No 00:07:12.383 May have multiple controllers: No 00:07:12.383 Associated with SR-IOV VF: No 00:07:12.383 Max Data Transfer Size: 524288 00:07:12.383 Max Number of Namespaces: 256 00:07:12.383 Max Number of I/O Queues: 64 00:07:12.383 NVMe Specification Version (VS): 1.4 00:07:12.383 NVMe Specification Version (Identify): 1.4 00:07:12.383 Maximum Queue Entries: 2048 00:07:12.383 Contiguous Queues Required: Yes 00:07:12.383 Arbitration Mechanisms Supported 00:07:12.383 Weighted Round Robin: Not Supported 00:07:12.383 Vendor Specific: Not Supported 00:07:12.383 Reset Timeout: 7500 ms 00:07:12.383 Doorbell Stride: 4 bytes 00:07:12.383 NVM Subsystem Reset: Not Supported 00:07:12.383 Command Sets Supported 00:07:12.383 NVM Command Set: Supported 00:07:12.383 Boot Partition: Not Supported 00:07:12.383 Memory Page Size Minimum: 4096 bytes 00:07:12.383 Memory Page Size Maximum: 65536 bytes 00:07:12.383 Persistent Memory Region: Not Supported 00:07:12.383 Optional Asynchronous Events Supported 00:07:12.383 Namespace Attribute Notices: Supported 00:07:12.383 Firmware 
Activation Notices: Not Supported 00:07:12.383 ANA Change Notices: Not Supported 00:07:12.383 PLE Aggregate Log Change Notices: Not Supported 00:07:12.383 LBA Status Info Alert Notices: Not Supported 00:07:12.383 EGE Aggregate Log Change Notices: Not Supported 00:07:12.383 Normal NVM Subsystem Shutdown event: Not Supported 00:07:12.383 Zone Descriptor Change Notices: Not Supported 00:07:12.383 Discovery Log Change Notices: Not Supported 00:07:12.383 Controller Attributes 00:07:12.383 128-bit Host Identifier: Not Supported 00:07:12.383 Non-Operational Permissive Mode: Not Supported 00:07:12.383 NVM Sets: Not Supported 00:07:12.383 Read Recovery Levels: Not Supported 00:07:12.383 Endurance Groups: Not Supported 00:07:12.383 Predictable Latency Mode: Not Supported 00:07:12.383 Traffic Based Keep ALive: Not Supported 00:07:12.383 Namespace Granularity: Not Supported 00:07:12.383 SQ Associations: Not Supported 00:07:12.383 UUID List: Not Supported 00:07:12.383 Multi-Domain Subsystem: Not Supported 00:07:12.383 Fixed Capacity Management: Not Supported 00:07:12.383 Variable Capacity Management: Not Supported 00:07:12.383 Delete Endurance Group: Not Supported 00:07:12.383 Delete NVM Set: Not Supported 00:07:12.383 Extended LBA Formats Supported: Supported 00:07:12.383 Flexible Data Placement Supported: Not Supported 00:07:12.383 00:07:12.383 Controller Memory Buffer Support 00:07:12.383 ================================ 00:07:12.383 Supported: No 00:07:12.383 00:07:12.383 Persistent Memory Region Support 00:07:12.383 ================================ 00:07:12.383 Supported: No 00:07:12.383 00:07:12.383 Admin Command Set Attributes 00:07:12.383 ============================ 00:07:12.383 Security Send/Receive: Not Supported 00:07:12.383 Format NVM: Supported 00:07:12.383 Firmware Activate/Download: Not Supported 00:07:12.383 Namespace Management: Supported 00:07:12.383 Device Self-Test: Not Supported 00:07:12.383 Directives: Supported 00:07:12.383 NVMe-MI: Not Supported 00:07:12.383 Virtualization Management: Not Supported 00:07:12.383 Doorbell Buffer Config: Supported 00:07:12.383 Get LBA Status Capability: Not Supported 00:07:12.383 Command & Feature Lockdown Capability: Not Supported 00:07:12.383 Abort Command Limit: 4 00:07:12.383 Async Event Request Limit: 4 00:07:12.383 Number of Firmware Slots: N/A 00:07:12.383 Firmware Slot 1 Read-Only: N/A 00:07:12.383 Firmware Activation Without Reset: N/A 00:07:12.383 Multiple Update Detection Support: N/A 00:07:12.383 Firmware Update Granularity: No Information Provided 00:07:12.383 Per-Namespace SMART Log: Yes 00:07:12.383 Asymmetric Namespace Access Log Page: Not Supported 00:07:12.383 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:12.383 Command Effects Log Page: Supported 00:07:12.383 Get Log Page Extended Data: Supported 00:07:12.383 Telemetry Log Pages: Not Supported 00:07:12.383 Persistent Event Log Pages: Not Supported 00:07:12.383 Supported Log Pages Log Page: May Support 00:07:12.383 Commands Supported & Effects Log Page: Not Supported 00:07:12.383 Feature Identifiers & Effects Log Page:May Support 00:07:12.383 NVMe-MI Commands & Effects Log Page: May Support 00:07:12.383 Data Area 4 for Telemetry Log: Not Supported 00:07:12.383 Error Log Page Entries Supported: 1 00:07:12.383 Keep Alive: Not Supported 00:07:12.383 00:07:12.383 NVM Command Set Attributes 00:07:12.383 ========================== 00:07:12.383 Submission Queue Entry Size 00:07:12.383 Max: 64 00:07:12.383 Min: 64 00:07:12.383 Completion Queue Entry Size 00:07:12.383 Max: 16 
00:07:12.383 Min: 16 00:07:12.383 Number of Namespaces: 256 00:07:12.383 Compare Command: Supported 00:07:12.383 Write Uncorrectable Command: Not Supported 00:07:12.383 Dataset Management Command: Supported 00:07:12.383 Write Zeroes Command: Supported 00:07:12.383 Set Features Save Field: Supported 00:07:12.383 Reservations: Not Supported 00:07:12.383 Timestamp: Supported 00:07:12.383 Copy: Supported 00:07:12.383 Volatile Write Cache: Present 00:07:12.383 Atomic Write Unit (Normal): 1 00:07:12.383 Atomic Write Unit (PFail): 1 00:07:12.383 Atomic Compare & Write Unit: 1 00:07:12.383 Fused Compare & Write: Not Supported 00:07:12.383 Scatter-Gather List 00:07:12.383 SGL Command Set: Supported 00:07:12.383 SGL Keyed: Not Supported 00:07:12.383 SGL Bit Bucket Descriptor: Not Supported 00:07:12.383 SGL Metadata Pointer: Not Supported 00:07:12.383 Oversized SGL: Not Supported 00:07:12.383 SGL Metadata Address: Not Supported 00:07:12.383 SGL Offset: Not Supported 00:07:12.383 Transport SGL Data Block: Not Supported 00:07:12.383 Replay Protected Memory Block: Not Supported 00:07:12.383 00:07:12.383 Firmware Slot Information 00:07:12.383 ========================= 00:07:12.383 Active slot: 1 00:07:12.383 Slot 1 Firmware Revision: 1.0 00:07:12.383 00:07:12.383 00:07:12.383 Commands Supported and Effects 00:07:12.383 ============================== 00:07:12.383 Admin Commands 00:07:12.383 -------------- 00:07:12.383 Delete I/O Submission Queue (00h): Supported 00:07:12.383 Create I/O Submission Queue (01h): Supported 00:07:12.383 Get Log Page (02h): Supported 00:07:12.383 Delete I/O Completion Queue (04h): Supported 00:07:12.383 Create I/O Completion Queue (05h): Supported 00:07:12.383 Identify (06h): Supported 00:07:12.383 Abort (08h): Supported 00:07:12.383 Set Features (09h): Supported 00:07:12.383 Get Features (0Ah): Supported 00:07:12.383 Asynchronous Event Request (0Ch): Supported 00:07:12.383 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:12.383 Directive Send (19h): Supported 00:07:12.383 Directive Receive (1Ah): Supported 00:07:12.383 Virtualization Management (1Ch): Supported 00:07:12.383 Doorbell Buffer Config (7Ch): Supported 00:07:12.383 Format NVM (80h): Supported LBA-Change 00:07:12.383 I/O Commands 00:07:12.383 ------------ 00:07:12.383 Flush (00h): Supported LBA-Change 00:07:12.383 Write (01h): Supported LBA-Change 00:07:12.383 Read (02h): Supported 00:07:12.383 Compare (05h): Supported 00:07:12.383 Write Zeroes (08h): Supported LBA-Change 00:07:12.383 Dataset Management (09h): Supported LBA-Change 00:07:12.383 Unknown (0Ch): Supported 00:07:12.383 Unknown (12h): Supported 00:07:12.383 Copy (19h): Supported LBA-Change 00:07:12.383 Unknown (1Dh): Supported LBA-Change 00:07:12.383 00:07:12.383 Error Log 00:07:12.383 ========= 00:07:12.383 00:07:12.383 Arbitration 00:07:12.383 =========== 00:07:12.383 Arbitration Burst: no limit 00:07:12.383 00:07:12.383 Power Management 00:07:12.383 ================ 00:07:12.383 Number of Power States: 1 00:07:12.383 Current Power State: Power State #0 00:07:12.383 Power State #0: 00:07:12.383 Max Power: 25.00 W 00:07:12.383 Non-Operational State: Operational 00:07:12.383 Entry Latency: 16 microseconds 00:07:12.383 Exit Latency: 4 microseconds 00:07:12.383 Relative Read Throughput: 0 00:07:12.383 Relative Read Latency: 0 00:07:12.383 Relative Write Throughput: 0 00:07:12.383 Relative Write Latency: 0 00:07:12.383 Idle Power: Not Reported 00:07:12.383 Active Power: Not Reported 00:07:12.383 Non-Operational Permissive Mode: Not Supported 
00:07:12.383 00:07:12.383 Health Information 00:07:12.383 ================== 00:07:12.383 Critical Warnings: 00:07:12.383 Available Spare Space: OK 00:07:12.383 Temperature: OK 00:07:12.383 Device Reliability: OK 00:07:12.383 Read Only: No 00:07:12.383 Volatile Memory Backup: OK 00:07:12.383 Current Temperature: 323 Kelvin (50 Celsius) 00:07:12.383 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:12.383 Available Spare: 0% 00:07:12.383 Available Spare Threshold: 0% 00:07:12.383 Life Percentage Used: 0% 00:07:12.384 Data Units Read: 2287 00:07:12.384 Data Units Written: 2074 00:07:12.384 Host Read Commands: 120889 00:07:12.384 Host Write Commands: 119158 00:07:12.384 Controller Busy Time: 0 minutes 00:07:12.384 Power Cycles: 0 00:07:12.384 Power On Hours: 0 hours 00:07:12.384 Unsafe Shutdowns: 0 00:07:12.384 Unrecoverable Media Errors: 0 00:07:12.384 Lifetime Error Log Entries: 0 00:07:12.384 Warning Temperature Time: 0 minutes 00:07:12.384 Critical Temperature Time: 0 minutes 00:07:12.384 00:07:12.384 Number of Queues 00:07:12.384 ================ 00:07:12.384 Number of I/O Submission Queues: 64 00:07:12.384 Number of I/O Completion Queues: 64 00:07:12.384 00:07:12.384 ZNS Specific Controller Data 00:07:12.384 ============================ 00:07:12.384 Zone Append Size Limit: 0 00:07:12.384 00:07:12.384 00:07:12.384 Active Namespaces 00:07:12.384 ================= 00:07:12.384 Namespace ID:1 00:07:12.384 Error Recovery Timeout: Unlimited 00:07:12.384 Command Set Identifier: NVM (00h) 00:07:12.384 Deallocate: Supported 00:07:12.384 Deallocated/Unwritten Error: Supported 00:07:12.384 Deallocated Read Value: All 0x00 00:07:12.384 Deallocate in Write Zeroes: Not Supported 00:07:12.384 Deallocated Guard Field: 0xFFFF 00:07:12.384 Flush: Supported 00:07:12.384 Reservation: Not Supported 00:07:12.384 Namespace Sharing Capabilities: Private 00:07:12.384 Size (in LBAs): 1048576 (4GiB) 00:07:12.384 Capacity (in LBAs): 1048576 (4GiB) 00:07:12.384 Utilization (in LBAs): 1048576 (4GiB) 00:07:12.384 Thin Provisioning: Not Supported 00:07:12.384 Per-NS Atomic Units: No 00:07:12.384 Maximum Single Source Range Length: 128 00:07:12.384 Maximum Copy Length: 128 00:07:12.384 Maximum Source Range Count: 128 00:07:12.384 NGUID/EUI64 Never Reused: No 00:07:12.384 Namespace Write Protected: No 00:07:12.384 Number of LBA Formats: 8 00:07:12.384 Current LBA Format: LBA Format #04 00:07:12.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:12.384 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:12.384 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:12.384 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:12.384 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:12.384 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:12.384 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:12.384 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:12.384 00:07:12.384 NVM Specific Namespace Data 00:07:12.384 =========================== 00:07:12.384 Logical Block Storage Tag Mask: 0 00:07:12.384 Protection Information Capabilities: 00:07:12.384 16b Guard Protection Information Storage Tag Support: No 00:07:12.384 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:12.384 Storage Tag Check Read Support: No 00:07:12.384 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Namespace ID:2 00:07:12.384 Error Recovery Timeout: Unlimited 00:07:12.384 Command Set Identifier: NVM (00h) 00:07:12.384 Deallocate: Supported 00:07:12.384 Deallocated/Unwritten Error: Supported 00:07:12.384 Deallocated Read Value: All 0x00 00:07:12.384 Deallocate in Write Zeroes: Not Supported 00:07:12.384 Deallocated Guard Field: 0xFFFF 00:07:12.384 Flush: Supported 00:07:12.384 Reservation: Not Supported 00:07:12.384 Namespace Sharing Capabilities: Private 00:07:12.384 Size (in LBAs): 1048576 (4GiB) 00:07:12.384 Capacity (in LBAs): 1048576 (4GiB) 00:07:12.384 Utilization (in LBAs): 1048576 (4GiB) 00:07:12.384 Thin Provisioning: Not Supported 00:07:12.384 Per-NS Atomic Units: No 00:07:12.384 Maximum Single Source Range Length: 128 00:07:12.384 Maximum Copy Length: 128 00:07:12.384 Maximum Source Range Count: 128 00:07:12.384 NGUID/EUI64 Never Reused: No 00:07:12.384 Namespace Write Protected: No 00:07:12.384 Number of LBA Formats: 8 00:07:12.384 Current LBA Format: LBA Format #04 00:07:12.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:12.384 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:12.384 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:12.384 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:12.384 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:12.384 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:12.384 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:12.384 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:12.384 00:07:12.384 NVM Specific Namespace Data 00:07:12.384 =========================== 00:07:12.384 Logical Block Storage Tag Mask: 0 00:07:12.384 Protection Information Capabilities: 00:07:12.384 16b Guard Protection Information Storage Tag Support: No 00:07:12.384 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:12.384 Storage Tag Check Read Support: No 00:07:12.384 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Namespace ID:3 00:07:12.384 Error Recovery Timeout: Unlimited 00:07:12.384 Command Set Identifier: NVM (00h) 00:07:12.384 Deallocate: Supported 00:07:12.384 Deallocated/Unwritten Error: Supported 00:07:12.384 Deallocated Read 
Value: All 0x00 00:07:12.384 Deallocate in Write Zeroes: Not Supported 00:07:12.384 Deallocated Guard Field: 0xFFFF 00:07:12.384 Flush: Supported 00:07:12.384 Reservation: Not Supported 00:07:12.384 Namespace Sharing Capabilities: Private 00:07:12.384 Size (in LBAs): 1048576 (4GiB) 00:07:12.384 Capacity (in LBAs): 1048576 (4GiB) 00:07:12.384 Utilization (in LBAs): 1048576 (4GiB) 00:07:12.384 Thin Provisioning: Not Supported 00:07:12.384 Per-NS Atomic Units: No 00:07:12.384 Maximum Single Source Range Length: 128 00:07:12.384 Maximum Copy Length: 128 00:07:12.384 Maximum Source Range Count: 128 00:07:12.384 NGUID/EUI64 Never Reused: No 00:07:12.384 Namespace Write Protected: No 00:07:12.384 Number of LBA Formats: 8 00:07:12.384 Current LBA Format: LBA Format #04 00:07:12.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:12.384 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:12.384 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:12.384 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:12.384 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:12.384 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:12.384 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:12.384 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:12.384 00:07:12.384 NVM Specific Namespace Data 00:07:12.384 =========================== 00:07:12.384 Logical Block Storage Tag Mask: 0 00:07:12.384 Protection Information Capabilities: 00:07:12.384 16b Guard Protection Information Storage Tag Support: No 00:07:12.384 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:12.384 Storage Tag Check Read Support: No 00:07:12.384 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.384 17:37:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:12.384 17:37:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:12.644 ===================================================== 00:07:12.644 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:12.644 ===================================================== 00:07:12.644 Controller Capabilities/Features 00:07:12.644 ================================ 00:07:12.644 Vendor ID: 1b36 00:07:12.644 Subsystem Vendor ID: 1af4 00:07:12.644 Serial Number: 12343 00:07:12.644 Model Number: QEMU NVMe Ctrl 00:07:12.644 Firmware Version: 8.0.0 00:07:12.644 Recommended Arb Burst: 6 00:07:12.644 IEEE OUI Identifier: 00 54 52 00:07:12.644 Multi-path I/O 00:07:12.644 May have multiple subsystem ports: No 00:07:12.644 May have multiple controllers: Yes 00:07:12.644 Associated with SR-IOV VF: No 00:07:12.644 Max Data Transfer Size: 524288 00:07:12.644 Max Number of Namespaces: 
256 00:07:12.644 Max Number of I/O Queues: 64 00:07:12.644 NVMe Specification Version (VS): 1.4 00:07:12.644 NVMe Specification Version (Identify): 1.4 00:07:12.644 Maximum Queue Entries: 2048 00:07:12.644 Contiguous Queues Required: Yes 00:07:12.644 Arbitration Mechanisms Supported 00:07:12.644 Weighted Round Robin: Not Supported 00:07:12.644 Vendor Specific: Not Supported 00:07:12.644 Reset Timeout: 7500 ms 00:07:12.644 Doorbell Stride: 4 bytes 00:07:12.644 NVM Subsystem Reset: Not Supported 00:07:12.644 Command Sets Supported 00:07:12.644 NVM Command Set: Supported 00:07:12.644 Boot Partition: Not Supported 00:07:12.644 Memory Page Size Minimum: 4096 bytes 00:07:12.644 Memory Page Size Maximum: 65536 bytes 00:07:12.644 Persistent Memory Region: Not Supported 00:07:12.644 Optional Asynchronous Events Supported 00:07:12.644 Namespace Attribute Notices: Supported 00:07:12.644 Firmware Activation Notices: Not Supported 00:07:12.644 ANA Change Notices: Not Supported 00:07:12.644 PLE Aggregate Log Change Notices: Not Supported 00:07:12.644 LBA Status Info Alert Notices: Not Supported 00:07:12.644 EGE Aggregate Log Change Notices: Not Supported 00:07:12.644 Normal NVM Subsystem Shutdown event: Not Supported 00:07:12.644 Zone Descriptor Change Notices: Not Supported 00:07:12.644 Discovery Log Change Notices: Not Supported 00:07:12.644 Controller Attributes 00:07:12.644 128-bit Host Identifier: Not Supported 00:07:12.644 Non-Operational Permissive Mode: Not Supported 00:07:12.644 NVM Sets: Not Supported 00:07:12.644 Read Recovery Levels: Not Supported 00:07:12.644 Endurance Groups: Supported 00:07:12.644 Predictable Latency Mode: Not Supported 00:07:12.644 Traffic Based Keep Alive: Not Supported 00:07:12.644 Namespace Granularity: Not Supported 00:07:12.644 SQ Associations: Not Supported 00:07:12.644 UUID List: Not Supported 00:07:12.644 Multi-Domain Subsystem: Not Supported 00:07:12.644 Fixed Capacity Management: Not Supported 00:07:12.644 Variable Capacity Management: Not Supported 00:07:12.644 Delete Endurance Group: Not Supported 00:07:12.644 Delete NVM Set: Not Supported 00:07:12.644 Extended LBA Formats Supported: Supported 00:07:12.644 Flexible Data Placement Supported: Supported 00:07:12.644 00:07:12.644 Controller Memory Buffer Support 00:07:12.644 ================================ 00:07:12.644 Supported: No 00:07:12.644 00:07:12.644 Persistent Memory Region Support 00:07:12.644 ================================ 00:07:12.644 Supported: No 00:07:12.644 00:07:12.644 Admin Command Set Attributes 00:07:12.644 ============================ 00:07:12.644 Security Send/Receive: Not Supported 00:07:12.644 Format NVM: Supported 00:07:12.644 Firmware Activate/Download: Not Supported 00:07:12.644 Namespace Management: Supported 00:07:12.644 Device Self-Test: Not Supported 00:07:12.644 Directives: Supported 00:07:12.644 NVMe-MI: Not Supported 00:07:12.644 Virtualization Management: Not Supported 00:07:12.644 Doorbell Buffer Config: Supported 00:07:12.644 Get LBA Status Capability: Not Supported 00:07:12.644 Command & Feature Lockdown Capability: Not Supported 00:07:12.644 Abort Command Limit: 4 00:07:12.644 Async Event Request Limit: 4 00:07:12.644 Number of Firmware Slots: N/A 00:07:12.644 Firmware Slot 1 Read-Only: N/A 00:07:12.644 Firmware Activation Without Reset: N/A 00:07:12.644 Multiple Update Detection Support: N/A 00:07:12.644 Firmware Update Granularity: No Information Provided 00:07:12.644 Per-Namespace SMART Log: Yes 00:07:12.644 Asymmetric Namespace Access Log Page: Not Supported
00:07:12.644 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:12.644 Command Effects Log Page: Supported 00:07:12.644 Get Log Page Extended Data: Supported 00:07:12.644 Telemetry Log Pages: Not Supported 00:07:12.644 Persistent Event Log Pages: Not Supported 00:07:12.644 Supported Log Pages Log Page: May Support 00:07:12.644 Commands Supported & Effects Log Page: Not Supported 00:07:12.644 Feature Identifiers & Effects Log Page: May Support 00:07:12.644 NVMe-MI Commands & Effects Log Page: May Support 00:07:12.644 Data Area 4 for Telemetry Log: Not Supported 00:07:12.644 Error Log Page Entries Supported: 1 00:07:12.644 Keep Alive: Not Supported 00:07:12.644 00:07:12.644 NVM Command Set Attributes 00:07:12.644 ========================== 00:07:12.644 Submission Queue Entry Size 00:07:12.644 Max: 64 00:07:12.644 Min: 64 00:07:12.644 Completion Queue Entry Size 00:07:12.644 Max: 16 00:07:12.644 Min: 16 00:07:12.644 Number of Namespaces: 256 00:07:12.644 Compare Command: Supported 00:07:12.644 Write Uncorrectable Command: Not Supported 00:07:12.644 Dataset Management Command: Supported 00:07:12.644 Write Zeroes Command: Supported 00:07:12.644 Set Features Save Field: Supported 00:07:12.644 Reservations: Not Supported 00:07:12.644 Timestamp: Supported 00:07:12.644 Copy: Supported 00:07:12.644 Volatile Write Cache: Present 00:07:12.644 Atomic Write Unit (Normal): 1 00:07:12.644 Atomic Write Unit (PFail): 1 00:07:12.644 Atomic Compare & Write Unit: 1 00:07:12.644 Fused Compare & Write: Not Supported 00:07:12.644 Scatter-Gather List 00:07:12.644 SGL Command Set: Supported 00:07:12.644 SGL Keyed: Not Supported 00:07:12.644 SGL Bit Bucket Descriptor: Not Supported 00:07:12.644 SGL Metadata Pointer: Not Supported 00:07:12.644 Oversized SGL: Not Supported 00:07:12.644 SGL Metadata Address: Not Supported 00:07:12.644 SGL Offset: Not Supported 00:07:12.644 Transport SGL Data Block: Not Supported 00:07:12.644 Replay Protected Memory Block: Not Supported 00:07:12.644 00:07:12.644 Firmware Slot Information 00:07:12.644 ========================= 00:07:12.644 Active slot: 1 00:07:12.644 Slot 1 Firmware Revision: 1.0 00:07:12.644 00:07:12.644 00:07:12.644 Commands Supported and Effects 00:07:12.644 ============================== 00:07:12.644 Admin Commands 00:07:12.644 -------------- 00:07:12.644 Delete I/O Submission Queue (00h): Supported 00:07:12.644 Create I/O Submission Queue (01h): Supported 00:07:12.644 Get Log Page (02h): Supported 00:07:12.644 Delete I/O Completion Queue (04h): Supported 00:07:12.644 Create I/O Completion Queue (05h): Supported 00:07:12.644 Identify (06h): Supported 00:07:12.644 Abort (08h): Supported 00:07:12.644 Set Features (09h): Supported 00:07:12.644 Get Features (0Ah): Supported 00:07:12.644 Asynchronous Event Request (0Ch): Supported 00:07:12.644 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:12.644 Directive Send (19h): Supported 00:07:12.644 Directive Receive (1Ah): Supported 00:07:12.644 Virtualization Management (1Ch): Supported 00:07:12.645 Doorbell Buffer Config (7Ch): Supported 00:07:12.645 Format NVM (80h): Supported LBA-Change 00:07:12.645 I/O Commands 00:07:12.645 ------------ 00:07:12.645 Flush (00h): Supported LBA-Change 00:07:12.645 Write (01h): Supported LBA-Change 00:07:12.645 Read (02h): Supported 00:07:12.645 Compare (05h): Supported 00:07:12.645 Write Zeroes (08h): Supported LBA-Change 00:07:12.645 Dataset Management (09h): Supported LBA-Change 00:07:12.645 Unknown (0Ch): Supported 00:07:12.645 Unknown (12h): Supported 00:07:12.645 Copy
(19h): Supported LBA-Change 00:07:12.645 Unknown (1Dh): Supported LBA-Change 00:07:12.645 00:07:12.645 Error Log 00:07:12.645 ========= 00:07:12.645 00:07:12.645 Arbitration 00:07:12.645 =========== 00:07:12.645 Arbitration Burst: no limit 00:07:12.645 00:07:12.645 Power Management 00:07:12.645 ================ 00:07:12.645 Number of Power States: 1 00:07:12.645 Current Power State: Power State #0 00:07:12.645 Power State #0: 00:07:12.645 Max Power: 25.00 W 00:07:12.645 Non-Operational State: Operational 00:07:12.645 Entry Latency: 16 microseconds 00:07:12.645 Exit Latency: 4 microseconds 00:07:12.645 Relative Read Throughput: 0 00:07:12.645 Relative Read Latency: 0 00:07:12.645 Relative Write Throughput: 0 00:07:12.645 Relative Write Latency: 0 00:07:12.645 Idle Power: Not Reported 00:07:12.645 Active Power: Not Reported 00:07:12.645 Non-Operational Permissive Mode: Not Supported 00:07:12.645 00:07:12.645 Health Information 00:07:12.645 ================== 00:07:12.645 Critical Warnings: 00:07:12.645 Available Spare Space: OK 00:07:12.645 Temperature: OK 00:07:12.645 Device Reliability: OK 00:07:12.645 Read Only: No 00:07:12.645 Volatile Memory Backup: OK 00:07:12.645 Current Temperature: 323 Kelvin (50 Celsius) 00:07:12.645 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:12.645 Available Spare: 0% 00:07:12.645 Available Spare Threshold: 0% 00:07:12.645 Life Percentage Used: 0% 00:07:12.645 Data Units Read: 1110 00:07:12.645 Data Units Written: 1040 00:07:12.645 Host Read Commands: 43073 00:07:12.645 Host Write Commands: 42496 00:07:12.645 Controller Busy Time: 0 minutes 00:07:12.645 Power Cycles: 0 00:07:12.645 Power On Hours: 0 hours 00:07:12.645 Unsafe Shutdowns: 0 00:07:12.645 Unrecoverable Media Errors: 0 00:07:12.645 Lifetime Error Log Entries: 0 00:07:12.645 Warning Temperature Time: 0 minutes 00:07:12.645 Critical Temperature Time: 0 minutes 00:07:12.645 00:07:12.645 Number of Queues 00:07:12.645 ================ 00:07:12.645 Number of I/O Submission Queues: 64 00:07:12.645 Number of I/O Completion Queues: 64 00:07:12.645 00:07:12.645 ZNS Specific Controller Data 00:07:12.645 ============================ 00:07:12.645 Zone Append Size Limit: 0 00:07:12.645 00:07:12.645 00:07:12.645 Active Namespaces 00:07:12.645 ================= 00:07:12.645 Namespace ID:1 00:07:12.645 Error Recovery Timeout: Unlimited 00:07:12.645 Command Set Identifier: NVM (00h) 00:07:12.645 Deallocate: Supported 00:07:12.645 Deallocated/Unwritten Error: Supported 00:07:12.645 Deallocated Read Value: All 0x00 00:07:12.645 Deallocate in Write Zeroes: Not Supported 00:07:12.645 Deallocated Guard Field: 0xFFFF 00:07:12.645 Flush: Supported 00:07:12.645 Reservation: Not Supported 00:07:12.645 Namespace Sharing Capabilities: Multiple Controllers 00:07:12.645 Size (in LBAs): 262144 (1GiB) 00:07:12.645 Capacity (in LBAs): 262144 (1GiB) 00:07:12.645 Utilization (in LBAs): 262144 (1GiB) 00:07:12.645 Thin Provisioning: Not Supported 00:07:12.645 Per-NS Atomic Units: No 00:07:12.645 Maximum Single Source Range Length: 128 00:07:12.645 Maximum Copy Length: 128 00:07:12.645 Maximum Source Range Count: 128 00:07:12.645 NGUID/EUI64 Never Reused: No 00:07:12.645 Namespace Write Protected: No 00:07:12.645 Endurance group ID: 1 00:07:12.645 Number of LBA Formats: 8 00:07:12.645 Current LBA Format: LBA Format #04 00:07:12.645 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:12.645 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:12.645 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:12.645 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:12.645 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:12.645 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:12.645 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:12.645 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:12.645 00:07:12.645 Get Feature FDP: 00:07:12.645 ================ 00:07:12.645 Enabled: Yes 00:07:12.645 FDP configuration index: 0 00:07:12.645 00:07:12.645 FDP configurations log page 00:07:12.645 =========================== 00:07:12.645 Number of FDP configurations: 1 00:07:12.645 Version: 0 00:07:12.645 Size: 112 00:07:12.645 FDP Configuration Descriptor: 0 00:07:12.645 Descriptor Size: 96 00:07:12.645 Reclaim Group Identifier format: 2 00:07:12.645 FDP Volatile Write Cache: Not Present 00:07:12.645 FDP Configuration: Valid 00:07:12.645 Vendor Specific Size: 0 00:07:12.645 Number of Reclaim Groups: 2 00:07:12.645 Number of Reclaim Unit Handles: 8 00:07:12.645 Max Placement Identifiers: 128 00:07:12.645 Number of Namespaces Supported: 256 00:07:12.645 Reclaim Unit Nominal Size: 6000000 bytes 00:07:12.645 Estimated Reclaim Unit Time Limit: Not Reported 00:07:12.645 RUH Desc #000: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #001: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #002: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #003: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #004: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #005: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #006: RUH Type: Initially Isolated 00:07:12.645 RUH Desc #007: RUH Type: Initially Isolated 00:07:12.645 00:07:12.645 FDP reclaim unit handle usage log page 00:07:12.645 ====================================== 00:07:12.645 Number of Reclaim Unit Handles: 8 00:07:12.645 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:12.645 RUH Usage Desc #001: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #002: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #003: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #004: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #005: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #006: RUH Attributes: Unused 00:07:12.645 RUH Usage Desc #007: RUH Attributes: Unused 00:07:12.645 00:07:12.645 FDP statistics log page 00:07:12.645 ======================= 00:07:12.645 Host bytes with metadata written: 644718592 00:07:12.645 Media bytes with metadata written: 644919296 00:07:12.645 Media bytes erased: 0 00:07:12.645 00:07:12.645 FDP events log page 00:07:12.645 =================== 00:07:12.645 Number of FDP events: 0 00:07:12.645 00:07:12.645 NVM Specific Namespace Data 00:07:12.645 =========================== 00:07:12.645 Logical Block Storage Tag Mask: 0 00:07:12.645 Protection Information Capabilities: 00:07:12.645 16b Guard Protection Information Storage Tag Support: No 00:07:12.645 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:12.645 Storage Tag Check Read Support: No 00:07:12.645 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format
#05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:12.645 00:07:12.645 real 0m1.197s 00:07:12.645 user 0m0.399s 00:07:12.645 sys 0m0.569s 00:07:12.645 17:37:02 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.645 17:37:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 ************************************ 00:07:12.645 END TEST nvme_identify 00:07:12.645 ************************************ 00:07:12.645 17:37:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:12.645 17:37:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.645 17:37:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.645 17:37:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 ************************************ 00:07:12.645 START TEST nvme_perf 00:07:12.645 ************************************ 00:07:12.645 17:37:02 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:12.645 17:37:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:14.023 Initializing NVMe Controllers 00:07:14.023 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:14.023 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:14.023 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:14.023 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:14.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:14.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:14.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:14.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:14.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:14.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:14.024 Initialization complete. Launching workers. 
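Side note on reproducing the two tool runs recorded above: both binaries can be invoked by hand against the same QEMU-emulated controllers. The sketch below is illustrative only; it assumes the checkout path and transport IDs shown in this log, and the flag descriptions are paraphrased from the tools' help output, so they may differ in other SPDK revisions.

  # Bind the NVMe controllers to a userspace driver so SPDK can attach to them
  # (setup.sh is SPDK's own helper script under scripts/).
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh

  # Dump controller and namespace data for one controller, as the nvme_identify
  # test does above: -r gives the transport ID (PCIe transport plus the
  # bus:device.function address), -i the shared memory group ID.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Replay the perf workload with the parameters logged above: queue depth 128
  # (-q), sequential reads (-w read), 12288-byte I/Os (-o), a 1-second run (-t),
  # latency tracking (-LL, as logged), shared memory group 0 (-i), and no
  # shutdown notification process (-N).
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N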
00:07:14.024 ======================================================== 00:07:14.024 Latency(us) 00:07:14.024 Device Information : IOPS MiB/s Average min max 00:07:14.024 PCIE (0000:00:10.0) NSID 1 from core 0: 15123.93 177.23 8474.21 5853.92 28186.23 00:07:14.024 PCIE (0000:00:11.0) NSID 1 from core 0: 15123.93 177.23 8461.40 5923.18 26279.75 00:07:14.024 PCIE (0000:00:13.0) NSID 1 from core 0: 15123.93 177.23 8446.96 5843.96 24853.05 00:07:14.024 PCIE (0000:00:12.0) NSID 1 from core 0: 15123.93 177.23 8432.75 5919.59 22995.52 00:07:14.024 PCIE (0000:00:12.0) NSID 2 from core 0: 15123.93 177.23 8418.33 5904.27 21879.14 00:07:14.024 PCIE (0000:00:12.0) NSID 3 from core 0: 15123.93 177.23 8403.29 5905.31 22245.18 00:07:14.024 ======================================================== 00:07:14.024 Total : 90743.57 1063.40 8439.49 5843.96 28186.23 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6099.889us 00:07:14.024 10.00000% : 6326.745us 00:07:14.024 25.00000% : 6604.012us 00:07:14.024 50.00000% : 7007.311us 00:07:14.024 75.00000% : 8015.557us 00:07:14.024 90.00000% : 14619.569us 00:07:14.024 95.00000% : 18148.431us 00:07:14.024 98.00000% : 20064.098us 00:07:14.024 99.00000% : 21374.818us 00:07:14.024 99.50000% : 22383.065us 00:07:14.024 99.90000% : 27827.594us 00:07:14.024 99.99000% : 28230.892us 00:07:14.024 99.99900% : 28230.892us 00:07:14.024 99.99990% : 28230.892us 00:07:14.024 99.99999% : 28230.892us 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6175.508us 00:07:14.024 10.00000% : 6377.157us 00:07:14.024 25.00000% : 6604.012us 00:07:14.024 50.00000% : 6956.898us 00:07:14.024 75.00000% : 8065.969us 00:07:14.024 90.00000% : 14417.920us 00:07:14.024 95.00000% : 18249.255us 00:07:14.024 98.00000% : 19963.274us 00:07:14.024 99.00000% : 20870.695us 00:07:14.024 99.50000% : 21273.994us 00:07:14.024 99.90000% : 26012.751us 00:07:14.024 99.99000% : 26416.049us 00:07:14.024 99.99900% : 26416.049us 00:07:14.024 99.99990% : 26416.049us 00:07:14.024 99.99999% : 26416.049us 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6175.508us 00:07:14.024 10.00000% : 6377.157us 00:07:14.024 25.00000% : 6604.012us 00:07:14.024 50.00000% : 6956.898us 00:07:14.024 75.00000% : 8015.557us 00:07:14.024 90.00000% : 14317.095us 00:07:14.024 95.00000% : 18148.431us 00:07:14.024 98.00000% : 19559.975us 00:07:14.024 99.00000% : 20265.748us 00:07:14.024 99.50000% : 20971.520us 00:07:14.024 99.90000% : 24500.382us 00:07:14.024 99.99000% : 24903.680us 00:07:14.024 99.99900% : 24903.680us 00:07:14.024 99.99990% : 24903.680us 00:07:14.024 99.99999% : 24903.680us 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6175.508us 00:07:14.024 10.00000% : 6377.157us 00:07:14.024 25.00000% : 6604.012us 00:07:14.024 50.00000% : 6956.898us 00:07:14.024 75.00000% : 8015.557us 00:07:14.024 90.00000% : 14115.446us 00:07:14.024 95.00000% : 18047.606us 00:07:14.024 98.00000% : 19660.800us 00:07:14.024 
99.00000% : 20870.695us 00:07:14.024 99.50000% : 21475.643us 00:07:14.024 99.90000% : 22584.714us 00:07:14.024 99.99000% : 22988.012us 00:07:14.024 99.99900% : 23088.837us 00:07:14.024 99.99990% : 23088.837us 00:07:14.024 99.99999% : 23088.837us 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6175.508us 00:07:14.024 10.00000% : 6377.157us 00:07:14.024 25.00000% : 6604.012us 00:07:14.024 50.00000% : 7007.311us 00:07:14.024 75.00000% : 8015.557us 00:07:14.024 90.00000% : 14317.095us 00:07:14.024 95.00000% : 18047.606us 00:07:14.024 98.00000% : 19559.975us 00:07:14.024 99.00000% : 20971.520us 00:07:14.024 99.50000% : 21374.818us 00:07:14.024 99.90000% : 21778.117us 00:07:14.024 99.99000% : 21878.942us 00:07:14.024 99.99900% : 21979.766us 00:07:14.024 99.99990% : 21979.766us 00:07:14.024 99.99999% : 21979.766us 00:07:14.024 00:07:14.024 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:14.024 ================================================================================= 00:07:14.024 1.00000% : 6175.508us 00:07:14.024 10.00000% : 6377.157us 00:07:14.024 25.00000% : 6654.425us 00:07:14.024 50.00000% : 7007.311us 00:07:14.024 75.00000% : 8015.557us 00:07:14.024 90.00000% : 14115.446us 00:07:14.024 95.00000% : 18047.606us 00:07:14.024 98.00000% : 18955.028us 00:07:14.024 99.00000% : 20870.695us 00:07:14.024 99.50000% : 21475.643us 00:07:14.024 99.90000% : 22080.591us 00:07:14.024 99.99000% : 22282.240us 00:07:14.024 99.99900% : 22282.240us 00:07:14.024 99.99990% : 22282.240us 00:07:14.024 99.99999% : 22282.240us 00:07:14.024 00:07:14.024 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:14.024 ============================================================================== 00:07:14.024 Range in us Cumulative IO count 00:07:14.024 5847.828 - 5873.034: 0.0527% ( 8) 00:07:14.024 5873.034 - 5898.240: 0.0791% ( 4) 00:07:14.024 5898.240 - 5923.446: 0.1187% ( 6) 00:07:14.024 5923.446 - 5948.652: 0.2044% ( 13) 00:07:14.024 5948.652 - 5973.858: 0.2505% ( 7) 00:07:14.024 5973.858 - 5999.065: 0.3165% ( 10) 00:07:14.024 5999.065 - 6024.271: 0.3890% ( 11) 00:07:14.024 6024.271 - 6049.477: 0.5011% ( 17) 00:07:14.024 6049.477 - 6074.683: 0.6791% ( 27) 00:07:14.024 6074.683 - 6099.889: 1.0285% ( 53) 00:07:14.024 6099.889 - 6125.095: 1.4438% ( 63) 00:07:14.024 6125.095 - 6150.302: 1.9844% ( 82) 00:07:14.024 6150.302 - 6175.508: 2.7888% ( 122) 00:07:14.024 6175.508 - 6200.714: 3.8634% ( 163) 00:07:14.024 6200.714 - 6225.920: 5.0897% ( 186) 00:07:14.024 6225.920 - 6251.126: 6.6456% ( 236) 00:07:14.024 6251.126 - 6276.332: 8.0301% ( 210) 00:07:14.024 6276.332 - 6301.538: 9.2827% ( 190) 00:07:14.024 6301.538 - 6326.745: 10.6079% ( 201) 00:07:14.024 6326.745 - 6351.951: 12.0253% ( 215) 00:07:14.024 6351.951 - 6377.157: 13.2252% ( 182) 00:07:14.024 6377.157 - 6402.363: 14.6295% ( 213) 00:07:14.024 6402.363 - 6427.569: 15.9876% ( 206) 00:07:14.024 6427.569 - 6452.775: 17.4182% ( 217) 00:07:14.024 6452.775 - 6503.188: 20.3521% ( 445) 00:07:14.024 6503.188 - 6553.600: 23.3122% ( 449) 00:07:14.024 6553.600 - 6604.012: 26.2526% ( 446) 00:07:14.024 6604.012 - 6654.425: 29.1996% ( 447) 00:07:14.024 6654.425 - 6704.837: 32.1994% ( 455) 00:07:14.024 6704.837 - 6755.249: 35.3310% ( 475) 00:07:14.024 6755.249 - 6805.662: 38.4626% ( 475) 00:07:14.024 6805.662 - 6856.074: 41.5216% ( 464) 00:07:14.024 6856.074 - 6906.486: 44.6466% ( 
474) 00:07:14.024 6906.486 - 6956.898: 47.7189% ( 466) 00:07:14.024 6956.898 - 7007.311: 50.8109% ( 469) 00:07:14.024 7007.311 - 7057.723: 53.8568% ( 462) 00:07:14.024 7057.723 - 7108.135: 57.0082% ( 478) 00:07:14.024 7108.135 - 7158.548: 60.0145% ( 456) 00:07:14.024 7158.548 - 7208.960: 62.7307% ( 412) 00:07:14.024 7208.960 - 7259.372: 64.9459% ( 336) 00:07:14.024 7259.372 - 7309.785: 66.5018% ( 236) 00:07:14.024 7309.785 - 7360.197: 67.4710% ( 147) 00:07:14.024 7360.197 - 7410.609: 68.4204% ( 144) 00:07:14.024 7410.609 - 7461.022: 69.1851% ( 116) 00:07:14.024 7461.022 - 7511.434: 69.7983% ( 93) 00:07:14.024 7511.434 - 7561.846: 70.4575% ( 100) 00:07:14.024 7561.846 - 7612.258: 71.0707% ( 93) 00:07:14.024 7612.258 - 7662.671: 71.6443% ( 87) 00:07:14.024 7662.671 - 7713.083: 72.1980% ( 84) 00:07:14.024 7713.083 - 7763.495: 72.7453% ( 83) 00:07:14.024 7763.495 - 7813.908: 73.3056% ( 85) 00:07:14.024 7813.908 - 7864.320: 73.7474% ( 67) 00:07:14.024 7864.320 - 7914.732: 74.1891% ( 67) 00:07:14.024 7914.732 - 7965.145: 74.6374% ( 68) 00:07:14.024 7965.145 - 8015.557: 75.0264% ( 59) 00:07:14.024 8015.557 - 8065.969: 75.4351% ( 62) 00:07:14.024 8065.969 - 8116.382: 75.8109% ( 57) 00:07:14.024 8116.382 - 8166.794: 76.2263% ( 63) 00:07:14.024 8166.794 - 8217.206: 76.5493% ( 49) 00:07:14.024 8217.206 - 8267.618: 76.8790% ( 50) 00:07:14.024 8267.618 - 8318.031: 77.1822% ( 46) 00:07:14.024 8318.031 - 8368.443: 77.5053% ( 49) 00:07:14.024 8368.443 - 8418.855: 77.8811% ( 57) 00:07:14.024 8418.855 - 8469.268: 78.2107% ( 50) 00:07:14.024 8469.268 - 8519.680: 78.5272% ( 48) 00:07:14.024 8519.680 - 8570.092: 78.8370% ( 47) 00:07:14.024 8570.092 - 8620.505: 79.1139% ( 42) 00:07:14.024 8620.505 - 8670.917: 79.3183% ( 31) 00:07:14.024 8670.917 - 8721.329: 79.5886% ( 41) 00:07:14.024 8721.329 - 8771.742: 79.7864% ( 30) 00:07:14.024 8771.742 - 8822.154: 79.9248% ( 21) 00:07:14.024 8822.154 - 8872.566: 80.0897% ( 25) 00:07:14.024 8872.566 - 8922.978: 80.2874% ( 30) 00:07:14.024 8922.978 - 8973.391: 80.4589% ( 26) 00:07:14.024 8973.391 - 9023.803: 80.6764% ( 33) 00:07:14.024 9023.803 - 9074.215: 80.8544% ( 27) 00:07:14.024 9074.215 - 9124.628: 81.0522% ( 30) 00:07:14.024 9124.628 - 9175.040: 81.2368% ( 28) 00:07:14.024 9175.040 - 9225.452: 81.4544% ( 33) 00:07:14.024 9225.452 - 9275.865: 81.6258% ( 26) 00:07:14.024 9275.865 - 9326.277: 81.7708% ( 22) 00:07:14.024 9326.277 - 9376.689: 81.9488% ( 27) 00:07:14.024 9376.689 - 9427.102: 82.1005% ( 23) 00:07:14.024 9427.102 - 9477.514: 82.2785% ( 27) 00:07:14.025 9477.514 - 9527.926: 82.4235% ( 22) 00:07:14.025 9527.926 - 9578.338: 82.5752% ( 23) 00:07:14.025 9578.338 - 9628.751: 82.7136% ( 21) 00:07:14.025 9628.751 - 9679.163: 82.8455% ( 20) 00:07:14.025 9679.163 - 9729.575: 82.9641% ( 18) 00:07:14.025 9729.575 - 9779.988: 83.1224% ( 24) 00:07:14.025 9779.988 - 9830.400: 83.2344% ( 17) 00:07:14.025 9830.400 - 9880.812: 83.3597% ( 19) 00:07:14.025 9880.812 - 9931.225: 83.4652% ( 16) 00:07:14.025 9931.225 - 9981.637: 83.6036% ( 21) 00:07:14.025 9981.637 - 10032.049: 83.6959% ( 14) 00:07:14.025 10032.049 - 10082.462: 83.7948% ( 15) 00:07:14.025 10082.462 - 10132.874: 83.9135% ( 18) 00:07:14.025 10132.874 - 10183.286: 83.9926% ( 12) 00:07:14.025 10183.286 - 10233.698: 84.0849% ( 14) 00:07:14.025 10233.698 - 10284.111: 84.1508% ( 10) 00:07:14.025 10284.111 - 10334.523: 84.2234% ( 11) 00:07:14.025 10334.523 - 10384.935: 84.3025% ( 12) 00:07:14.025 10384.935 - 10435.348: 84.3816% ( 12) 00:07:14.025 10435.348 - 10485.760: 84.4541% ( 11) 00:07:14.025 10485.760 - 10536.172: 
84.5464% ( 14) 00:07:14.025 10536.172 - 10586.585: 84.6255% ( 12) 00:07:14.025 10586.585 - 10636.997: 84.7046% ( 12) 00:07:14.025 10636.997 - 10687.409: 84.7706% ( 10) 00:07:14.025 10687.409 - 10737.822: 84.8365% ( 10) 00:07:14.025 10737.822 - 10788.234: 84.9156% ( 12) 00:07:14.025 10788.234 - 10838.646: 85.0079% ( 14) 00:07:14.025 10838.646 - 10889.058: 85.0936% ( 13) 00:07:14.025 10889.058 - 10939.471: 85.1859% ( 14) 00:07:14.025 10939.471 - 10989.883: 85.2650% ( 12) 00:07:14.025 10989.883 - 11040.295: 85.3639% ( 15) 00:07:14.025 11040.295 - 11090.708: 85.4760% ( 17) 00:07:14.025 11090.708 - 11141.120: 85.6013% ( 19) 00:07:14.025 11141.120 - 11191.532: 85.6936% ( 14) 00:07:14.025 11191.532 - 11241.945: 85.8188% ( 19) 00:07:14.025 11241.945 - 11292.357: 85.9045% ( 13) 00:07:14.025 11292.357 - 11342.769: 86.0034% ( 15) 00:07:14.025 11342.769 - 11393.182: 86.0825% ( 12) 00:07:14.025 11393.182 - 11443.594: 86.1617% ( 12) 00:07:14.025 11443.594 - 11494.006: 86.2671% ( 16) 00:07:14.025 11494.006 - 11544.418: 86.3133% ( 7) 00:07:14.025 11544.418 - 11594.831: 86.3594% ( 7) 00:07:14.025 11594.831 - 11645.243: 86.4386% ( 12) 00:07:14.025 11645.243 - 11695.655: 86.4781% ( 6) 00:07:14.025 11695.655 - 11746.068: 86.5770% ( 15) 00:07:14.025 11746.068 - 11796.480: 86.6232% ( 7) 00:07:14.025 11796.480 - 11846.892: 86.7089% ( 13) 00:07:14.025 11846.892 - 11897.305: 86.7682% ( 9) 00:07:14.025 11897.305 - 11947.717: 86.8275% ( 9) 00:07:14.025 11947.717 - 11998.129: 86.8737% ( 7) 00:07:14.025 11998.129 - 12048.542: 86.9396% ( 10) 00:07:14.025 12048.542 - 12098.954: 86.9989% ( 9) 00:07:14.025 12098.954 - 12149.366: 87.0781% ( 12) 00:07:14.025 12149.366 - 12199.778: 87.1440% ( 10) 00:07:14.025 12199.778 - 12250.191: 87.2165% ( 11) 00:07:14.025 12250.191 - 12300.603: 87.2693% ( 8) 00:07:14.025 12300.603 - 12351.015: 87.3220% ( 8) 00:07:14.025 12351.015 - 12401.428: 87.3681% ( 7) 00:07:14.025 12401.428 - 12451.840: 87.4143% ( 7) 00:07:14.025 12451.840 - 12502.252: 87.4604% ( 7) 00:07:14.025 12502.252 - 12552.665: 87.5132% ( 8) 00:07:14.025 12552.665 - 12603.077: 87.5659% ( 8) 00:07:14.025 12603.077 - 12653.489: 87.6055% ( 6) 00:07:14.025 12653.489 - 12703.902: 87.6516% ( 7) 00:07:14.025 12703.902 - 12754.314: 87.7044% ( 8) 00:07:14.025 12754.314 - 12804.726: 87.7835% ( 12) 00:07:14.025 12804.726 - 12855.138: 87.8362% ( 8) 00:07:14.025 12855.138 - 12905.551: 87.8824% ( 7) 00:07:14.025 12905.551 - 13006.375: 87.9615% ( 12) 00:07:14.025 13006.375 - 13107.200: 88.0538% ( 14) 00:07:14.025 13107.200 - 13208.025: 88.1725% ( 18) 00:07:14.025 13208.025 - 13308.849: 88.2977% ( 19) 00:07:14.025 13308.849 - 13409.674: 88.4230% ( 19) 00:07:14.025 13409.674 - 13510.498: 88.5680% ( 22) 00:07:14.025 13510.498 - 13611.323: 88.6603% ( 14) 00:07:14.025 13611.323 - 13712.148: 88.7790% ( 18) 00:07:14.025 13712.148 - 13812.972: 88.9043% ( 19) 00:07:14.025 13812.972 - 13913.797: 89.0493% ( 22) 00:07:14.025 13913.797 - 14014.622: 89.1878% ( 21) 00:07:14.025 14014.622 - 14115.446: 89.3328% ( 22) 00:07:14.025 14115.446 - 14216.271: 89.4778% ( 22) 00:07:14.025 14216.271 - 14317.095: 89.6559% ( 27) 00:07:14.025 14317.095 - 14417.920: 89.7811% ( 19) 00:07:14.025 14417.920 - 14518.745: 89.8668% ( 13) 00:07:14.025 14518.745 - 14619.569: 90.0382% ( 26) 00:07:14.025 14619.569 - 14720.394: 90.1635% ( 19) 00:07:14.025 14720.394 - 14821.218: 90.2822% ( 18) 00:07:14.025 14821.218 - 14922.043: 90.4668% ( 28) 00:07:14.025 14922.043 - 15022.868: 90.6184% ( 23) 00:07:14.025 15022.868 - 15123.692: 90.7503% ( 20) 00:07:14.025 15123.692 - 15224.517: 
90.8887% ( 21) 00:07:14.025 15224.517 - 15325.342: 91.0799% ( 29) 00:07:14.025 15325.342 - 15426.166: 91.2777% ( 30) 00:07:14.025 15426.166 - 15526.991: 91.4030% ( 19) 00:07:14.025 15526.991 - 15627.815: 91.5612% ( 24) 00:07:14.025 15627.815 - 15728.640: 91.6667% ( 16) 00:07:14.025 15728.640 - 15829.465: 91.7524% ( 13) 00:07:14.025 15829.465 - 15930.289: 91.8381% ( 13) 00:07:14.025 15930.289 - 16031.114: 92.0029% ( 25) 00:07:14.025 16031.114 - 16131.938: 92.0952% ( 14) 00:07:14.025 16131.938 - 16232.763: 92.1875% ( 14) 00:07:14.025 16232.763 - 16333.588: 92.2798% ( 14) 00:07:14.025 16333.588 - 16434.412: 92.3457% ( 10) 00:07:14.025 16434.412 - 16535.237: 92.4314% ( 13) 00:07:14.025 16535.237 - 16636.062: 92.5567% ( 19) 00:07:14.025 16636.062 - 16736.886: 92.6424% ( 13) 00:07:14.025 16736.886 - 16837.711: 92.7479% ( 16) 00:07:14.025 16837.711 - 16938.535: 92.8534% ( 16) 00:07:14.025 16938.535 - 17039.360: 93.0841% ( 35) 00:07:14.025 17039.360 - 17140.185: 93.2226% ( 21) 00:07:14.025 17140.185 - 17241.009: 93.3874% ( 25) 00:07:14.025 17241.009 - 17341.834: 93.4863% ( 15) 00:07:14.025 17341.834 - 17442.658: 93.6709% ( 28) 00:07:14.025 17442.658 - 17543.483: 93.8423% ( 26) 00:07:14.025 17543.483 - 17644.308: 94.0467% ( 31) 00:07:14.025 17644.308 - 17745.132: 94.2247% ( 27) 00:07:14.025 17745.132 - 17845.957: 94.4818% ( 39) 00:07:14.025 17845.957 - 17946.782: 94.7257% ( 37) 00:07:14.025 17946.782 - 18047.606: 94.9565% ( 35) 00:07:14.025 18047.606 - 18148.431: 95.2598% ( 46) 00:07:14.025 18148.431 - 18249.255: 95.5037% ( 37) 00:07:14.025 18249.255 - 18350.080: 95.8993% ( 60) 00:07:14.025 18350.080 - 18450.905: 96.1300% ( 35) 00:07:14.025 18450.905 - 18551.729: 96.3410% ( 32) 00:07:14.025 18551.729 - 18652.554: 96.6377% ( 45) 00:07:14.025 18652.554 - 18753.378: 96.8420% ( 31) 00:07:14.025 18753.378 - 18854.203: 97.0860% ( 37) 00:07:14.025 18854.203 - 18955.028: 97.2508% ( 25) 00:07:14.025 18955.028 - 19055.852: 97.3431% ( 14) 00:07:14.025 19055.852 - 19156.677: 97.4749% ( 20) 00:07:14.025 19156.677 - 19257.502: 97.5738% ( 15) 00:07:14.025 19257.502 - 19358.326: 97.6727% ( 15) 00:07:14.025 19358.326 - 19459.151: 97.7255% ( 8) 00:07:14.025 19459.151 - 19559.975: 97.7716% ( 7) 00:07:14.025 19559.975 - 19660.800: 97.8507% ( 12) 00:07:14.025 19660.800 - 19761.625: 97.9035% ( 8) 00:07:14.025 19761.625 - 19862.449: 97.9694% ( 10) 00:07:14.025 19862.449 - 19963.274: 97.9826% ( 2) 00:07:14.025 19963.274 - 20064.098: 98.0353% ( 8) 00:07:14.025 20064.098 - 20164.923: 98.1276% ( 14) 00:07:14.025 20164.923 - 20265.748: 98.1738% ( 7) 00:07:14.025 20265.748 - 20366.572: 98.2463% ( 11) 00:07:14.025 20366.572 - 20467.397: 98.3188% ( 11) 00:07:14.025 20467.397 - 20568.222: 98.3716% ( 8) 00:07:14.025 20568.222 - 20669.046: 98.4309% ( 9) 00:07:14.025 20669.046 - 20769.871: 98.4705% ( 6) 00:07:14.025 20769.871 - 20870.695: 98.5562% ( 13) 00:07:14.025 20870.695 - 20971.520: 98.6419% ( 13) 00:07:14.025 20971.520 - 21072.345: 98.7210% ( 12) 00:07:14.025 21072.345 - 21173.169: 98.8001% ( 12) 00:07:14.025 21173.169 - 21273.994: 98.8990% ( 15) 00:07:14.025 21273.994 - 21374.818: 99.0045% ( 16) 00:07:14.025 21374.818 - 21475.643: 99.0770% ( 11) 00:07:14.025 21475.643 - 21576.468: 99.1495% ( 11) 00:07:14.025 21576.468 - 21677.292: 99.2023% ( 8) 00:07:14.025 21677.292 - 21778.117: 99.3078% ( 16) 00:07:14.025 21778.117 - 21878.942: 99.3605% ( 8) 00:07:14.025 21878.942 - 21979.766: 99.4066% ( 7) 00:07:14.025 21979.766 - 22080.591: 99.4462% ( 6) 00:07:14.025 22080.591 - 22181.415: 99.4726% ( 4) 00:07:14.025 22181.415 - 
22282.240: 99.4924% ( 3) 00:07:14.025 22282.240 - 22383.065: 99.5253% ( 5) 00:07:14.025 22383.065 - 22483.889: 99.5451% ( 3) 00:07:14.025 22483.889 - 22584.714: 99.5649% ( 3) 00:07:14.025 22584.714 - 22685.538: 99.5781% ( 2) 00:07:14.025 26214.400 - 26416.049: 99.5847% ( 1) 00:07:14.025 26416.049 - 26617.698: 99.6242% ( 6) 00:07:14.025 26617.698 - 26819.348: 99.6704% ( 7) 00:07:14.025 26819.348 - 27020.997: 99.7231% ( 8) 00:07:14.025 27020.997 - 27222.646: 99.7627% ( 6) 00:07:14.025 27222.646 - 27424.295: 99.8154% ( 8) 00:07:14.025 27424.295 - 27625.945: 99.8681% ( 8) 00:07:14.025 27625.945 - 27827.594: 99.9143% ( 7) 00:07:14.025 27827.594 - 28029.243: 99.9670% ( 8) 00:07:14.025 28029.243 - 28230.892: 100.0000% ( 5) 00:07:14.025 00:07:14.025 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:14.025 ============================================================================== 00:07:14.025 Range in us Cumulative IO count 00:07:14.025 5898.240 - 5923.446: 0.0066% ( 1) 00:07:14.025 5923.446 - 5948.652: 0.0396% ( 5) 00:07:14.025 5948.652 - 5973.858: 0.0659% ( 4) 00:07:14.025 5973.858 - 5999.065: 0.1121% ( 7) 00:07:14.025 5999.065 - 6024.271: 0.1714% ( 9) 00:07:14.025 6024.271 - 6049.477: 0.2901% ( 18) 00:07:14.025 6049.477 - 6074.683: 0.3626% ( 11) 00:07:14.025 6074.683 - 6099.889: 0.4945% ( 20) 00:07:14.025 6099.889 - 6125.095: 0.6791% ( 28) 00:07:14.025 6125.095 - 6150.302: 0.8703% ( 29) 00:07:14.025 6150.302 - 6175.508: 1.1340% ( 40) 00:07:14.025 6175.508 - 6200.714: 1.5361% ( 61) 00:07:14.025 6200.714 - 6225.920: 2.0965% ( 85) 00:07:14.025 6225.920 - 6251.126: 3.0459% ( 144) 00:07:14.025 6251.126 - 6276.332: 4.3051% ( 191) 00:07:14.025 6276.332 - 6301.538: 5.7687% ( 222) 00:07:14.025 6301.538 - 6326.745: 7.1994% ( 217) 00:07:14.025 6326.745 - 6351.951: 8.7816% ( 240) 00:07:14.025 6351.951 - 6377.157: 10.4562% ( 254) 00:07:14.025 6377.157 - 6402.363: 12.0649% ( 244) 00:07:14.025 6402.363 - 6427.569: 13.7592% ( 257) 00:07:14.026 6427.569 - 6452.775: 15.4338% ( 254) 00:07:14.026 6452.775 - 6503.188: 18.6116% ( 482) 00:07:14.026 6503.188 - 6553.600: 21.9409% ( 505) 00:07:14.026 6553.600 - 6604.012: 25.2967% ( 509) 00:07:14.026 6604.012 - 6654.425: 28.8107% ( 533) 00:07:14.026 6654.425 - 6704.837: 32.2587% ( 523) 00:07:14.026 6704.837 - 6755.249: 35.8122% ( 539) 00:07:14.026 6755.249 - 6805.662: 39.4515% ( 552) 00:07:14.026 6805.662 - 6856.074: 42.9720% ( 534) 00:07:14.026 6856.074 - 6906.486: 46.4662% ( 530) 00:07:14.026 6906.486 - 6956.898: 50.0198% ( 539) 00:07:14.026 6956.898 - 7007.311: 53.4942% ( 527) 00:07:14.026 7007.311 - 7057.723: 56.8697% ( 512) 00:07:14.026 7057.723 - 7108.135: 60.1464% ( 497) 00:07:14.026 7108.135 - 7158.548: 62.9351% ( 423) 00:07:14.026 7158.548 - 7208.960: 65.0646% ( 323) 00:07:14.026 7208.960 - 7259.372: 66.3964% ( 202) 00:07:14.026 7259.372 - 7309.785: 67.2864% ( 135) 00:07:14.026 7309.785 - 7360.197: 68.0578% ( 117) 00:07:14.026 7360.197 - 7410.609: 68.7170% ( 100) 00:07:14.026 7410.609 - 7461.022: 69.3104% ( 90) 00:07:14.026 7461.022 - 7511.434: 69.9037% ( 90) 00:07:14.026 7511.434 - 7561.846: 70.4971% ( 90) 00:07:14.026 7561.846 - 7612.258: 71.0377% ( 82) 00:07:14.026 7612.258 - 7662.671: 71.6047% ( 86) 00:07:14.026 7662.671 - 7713.083: 72.1189% ( 78) 00:07:14.026 7713.083 - 7763.495: 72.5672% ( 68) 00:07:14.026 7763.495 - 7813.908: 73.0683% ( 76) 00:07:14.026 7813.908 - 7864.320: 73.5364% ( 71) 00:07:14.026 7864.320 - 7914.732: 73.9517% ( 63) 00:07:14.026 7914.732 - 7965.145: 74.4198% ( 71) 00:07:14.026 7965.145 - 8015.557: 74.7693% ( 53) 
00:07:14.026 8015.557 - 8065.969: 75.2176% ( 68) 00:07:14.026 8065.969 - 8116.382: 75.5868% ( 56) 00:07:14.026 8116.382 - 8166.794: 76.0153% ( 65) 00:07:14.026 8166.794 - 8217.206: 76.3713% ( 54) 00:07:14.026 8217.206 - 8267.618: 76.7669% ( 60) 00:07:14.026 8267.618 - 8318.031: 77.1163% ( 53) 00:07:14.026 8318.031 - 8368.443: 77.5053% ( 59) 00:07:14.026 8368.443 - 8418.855: 77.8613% ( 54) 00:07:14.026 8418.855 - 8469.268: 78.1777% ( 48) 00:07:14.026 8469.268 - 8519.680: 78.5140% ( 51) 00:07:14.026 8519.680 - 8570.092: 78.8041% ( 44) 00:07:14.026 8570.092 - 8620.505: 79.0612% ( 39) 00:07:14.026 8620.505 - 8670.917: 79.2985% ( 36) 00:07:14.026 8670.917 - 8721.329: 79.5227% ( 34) 00:07:14.026 8721.329 - 8771.742: 79.7205% ( 30) 00:07:14.026 8771.742 - 8822.154: 79.8985% ( 27) 00:07:14.026 8822.154 - 8872.566: 80.0633% ( 25) 00:07:14.026 8872.566 - 8922.978: 80.2347% ( 26) 00:07:14.026 8922.978 - 8973.391: 80.3929% ( 24) 00:07:14.026 8973.391 - 9023.803: 80.5248% ( 20) 00:07:14.026 9023.803 - 9074.215: 80.6566% ( 20) 00:07:14.026 9074.215 - 9124.628: 80.8940% ( 36) 00:07:14.026 9124.628 - 9175.040: 81.0918% ( 30) 00:07:14.026 9175.040 - 9225.452: 81.3225% ( 35) 00:07:14.026 9225.452 - 9275.865: 81.5401% ( 33) 00:07:14.026 9275.865 - 9326.277: 81.7642% ( 34) 00:07:14.026 9326.277 - 9376.689: 81.9357% ( 26) 00:07:14.026 9376.689 - 9427.102: 82.1466% ( 32) 00:07:14.026 9427.102 - 9477.514: 82.3312% ( 28) 00:07:14.026 9477.514 - 9527.926: 82.5290% ( 30) 00:07:14.026 9527.926 - 9578.338: 82.7070% ( 27) 00:07:14.026 9578.338 - 9628.751: 82.8652% ( 24) 00:07:14.026 9628.751 - 9679.163: 83.0367% ( 26) 00:07:14.026 9679.163 - 9729.575: 83.2015% ( 25) 00:07:14.026 9729.575 - 9779.988: 83.3531% ( 23) 00:07:14.026 9779.988 - 9830.400: 83.4850% ( 20) 00:07:14.026 9830.400 - 9880.812: 83.6036% ( 18) 00:07:14.026 9880.812 - 9931.225: 83.7223% ( 18) 00:07:14.026 9931.225 - 9981.637: 83.8608% ( 21) 00:07:14.026 9981.637 - 10032.049: 84.0190% ( 24) 00:07:14.026 10032.049 - 10082.462: 84.1574% ( 21) 00:07:14.026 10082.462 - 10132.874: 84.2431% ( 13) 00:07:14.026 10132.874 - 10183.286: 84.3157% ( 11) 00:07:14.026 10183.286 - 10233.698: 84.3618% ( 7) 00:07:14.026 10233.698 - 10284.111: 84.4080% ( 7) 00:07:14.026 10284.111 - 10334.523: 84.4475% ( 6) 00:07:14.026 10334.523 - 10384.935: 84.4937% ( 7) 00:07:14.026 10384.935 - 10435.348: 84.5398% ( 7) 00:07:14.026 10435.348 - 10485.760: 84.5728% ( 5) 00:07:14.026 10485.760 - 10536.172: 84.6123% ( 6) 00:07:14.026 10536.172 - 10586.585: 84.6980% ( 13) 00:07:14.026 10586.585 - 10636.997: 84.7640% ( 10) 00:07:14.026 10636.997 - 10687.409: 84.8167% ( 8) 00:07:14.026 10687.409 - 10737.822: 84.8958% ( 12) 00:07:14.026 10737.822 - 10788.234: 85.0277% ( 20) 00:07:14.026 10788.234 - 10838.646: 85.1068% ( 12) 00:07:14.026 10838.646 - 10889.058: 85.1859% ( 12) 00:07:14.026 10889.058 - 10939.471: 85.2716% ( 13) 00:07:14.026 10939.471 - 10989.883: 85.3507% ( 12) 00:07:14.026 10989.883 - 11040.295: 85.4430% ( 14) 00:07:14.026 11040.295 - 11090.708: 85.5222% ( 12) 00:07:14.026 11090.708 - 11141.120: 85.5947% ( 11) 00:07:14.026 11141.120 - 11191.532: 85.6870% ( 14) 00:07:14.026 11191.532 - 11241.945: 85.7793% ( 14) 00:07:14.026 11241.945 - 11292.357: 85.8782% ( 15) 00:07:14.026 11292.357 - 11342.769: 85.9771% ( 15) 00:07:14.026 11342.769 - 11393.182: 86.0957% ( 18) 00:07:14.026 11393.182 - 11443.594: 86.2144% ( 18) 00:07:14.026 11443.594 - 11494.006: 86.3463% ( 20) 00:07:14.026 11494.006 - 11544.418: 86.4913% ( 22) 00:07:14.026 11544.418 - 11594.831: 86.6034% ( 17) 00:07:14.026 
11594.831 - 11645.243: 86.7089% ( 16) 00:07:14.026 11645.243 - 11695.655: 86.8209% ( 17) 00:07:14.026 11695.655 - 11746.068: 86.9198% ( 15) 00:07:14.026 11746.068 - 11796.480: 87.0187% ( 15) 00:07:14.026 11796.480 - 11846.892: 87.0912% ( 11) 00:07:14.026 11846.892 - 11897.305: 87.1572% ( 10) 00:07:14.026 11897.305 - 11947.717: 87.2165% ( 9) 00:07:14.026 11947.717 - 11998.129: 87.2693% ( 8) 00:07:14.026 11998.129 - 12048.542: 87.3286% ( 9) 00:07:14.026 12048.542 - 12098.954: 87.3879% ( 9) 00:07:14.026 12098.954 - 12149.366: 87.4539% ( 10) 00:07:14.026 12149.366 - 12199.778: 87.5066% ( 8) 00:07:14.026 12199.778 - 12250.191: 87.5593% ( 8) 00:07:14.026 12250.191 - 12300.603: 87.5989% ( 6) 00:07:14.026 12300.603 - 12351.015: 87.6319% ( 5) 00:07:14.026 12351.015 - 12401.428: 87.6648% ( 5) 00:07:14.026 12401.428 - 12451.840: 87.6912% ( 4) 00:07:14.026 12451.840 - 12502.252: 87.7242% ( 5) 00:07:14.026 12502.252 - 12552.665: 87.7769% ( 8) 00:07:14.026 12552.665 - 12603.077: 87.8033% ( 4) 00:07:14.026 12603.077 - 12653.489: 87.8362% ( 5) 00:07:14.026 12653.489 - 12703.902: 87.8626% ( 4) 00:07:14.026 12703.902 - 12754.314: 87.8824% ( 3) 00:07:14.026 12754.314 - 12804.726: 87.9022% ( 3) 00:07:14.026 12804.726 - 12855.138: 87.9351% ( 5) 00:07:14.026 12855.138 - 12905.551: 87.9747% ( 6) 00:07:14.026 12905.551 - 13006.375: 88.0472% ( 11) 00:07:14.026 13006.375 - 13107.200: 88.1329% ( 13) 00:07:14.026 13107.200 - 13208.025: 88.2252% ( 14) 00:07:14.026 13208.025 - 13308.849: 88.3175% ( 14) 00:07:14.026 13308.849 - 13409.674: 88.4428% ( 19) 00:07:14.026 13409.674 - 13510.498: 88.6010% ( 24) 00:07:14.026 13510.498 - 13611.323: 88.7988% ( 30) 00:07:14.026 13611.323 - 13712.148: 88.9966% ( 30) 00:07:14.026 13712.148 - 13812.972: 89.1746% ( 27) 00:07:14.026 13812.972 - 13913.797: 89.3460% ( 26) 00:07:14.026 13913.797 - 14014.622: 89.4778% ( 20) 00:07:14.026 14014.622 - 14115.446: 89.6229% ( 22) 00:07:14.026 14115.446 - 14216.271: 89.7679% ( 22) 00:07:14.026 14216.271 - 14317.095: 89.9130% ( 22) 00:07:14.026 14317.095 - 14417.920: 90.0778% ( 25) 00:07:14.026 14417.920 - 14518.745: 90.2162% ( 21) 00:07:14.026 14518.745 - 14619.569: 90.3217% ( 16) 00:07:14.026 14619.569 - 14720.394: 90.4008% ( 12) 00:07:14.026 14720.394 - 14821.218: 90.4668% ( 10) 00:07:14.026 14821.218 - 14922.043: 90.5129% ( 7) 00:07:14.026 14922.043 - 15022.868: 90.5525% ( 6) 00:07:14.026 15022.868 - 15123.692: 90.6118% ( 9) 00:07:14.026 15123.692 - 15224.517: 90.6909% ( 12) 00:07:14.026 15224.517 - 15325.342: 90.7832% ( 14) 00:07:14.026 15325.342 - 15426.166: 90.9085% ( 19) 00:07:14.026 15426.166 - 15526.991: 91.0140% ( 16) 00:07:14.026 15526.991 - 15627.815: 91.1920% ( 27) 00:07:14.026 15627.815 - 15728.640: 91.3107% ( 18) 00:07:14.026 15728.640 - 15829.465: 91.4030% ( 14) 00:07:14.026 15829.465 - 15930.289: 91.5216% ( 18) 00:07:14.026 15930.289 - 16031.114: 91.6469% ( 19) 00:07:14.026 16031.114 - 16131.938: 91.8447% ( 30) 00:07:14.026 16131.938 - 16232.763: 91.9963% ( 23) 00:07:14.026 16232.763 - 16333.588: 92.1414% ( 22) 00:07:14.026 16333.588 - 16434.412: 92.2996% ( 24) 00:07:14.026 16434.412 - 16535.237: 92.4446% ( 22) 00:07:14.026 16535.237 - 16636.062: 92.5237% ( 12) 00:07:14.026 16636.062 - 16736.886: 92.6160% ( 14) 00:07:14.026 16736.886 - 16837.711: 92.7611% ( 22) 00:07:14.026 16837.711 - 16938.535: 92.8929% ( 20) 00:07:14.026 16938.535 - 17039.360: 93.0512% ( 24) 00:07:14.026 17039.360 - 17140.185: 93.1435% ( 14) 00:07:14.026 17140.185 - 17241.009: 93.2358% ( 14) 00:07:14.026 17241.009 - 17341.834: 93.3149% ( 12) 00:07:14.026 
17341.834 - 17442.658: 93.3940% ( 12) 00:07:14.026 17442.658 - 17543.483: 93.4731% ( 12) 00:07:14.026 17543.483 - 17644.308: 93.5588% ( 13) 00:07:14.026 17644.308 - 17745.132: 93.7698% ( 32) 00:07:14.026 17745.132 - 17845.957: 94.0071% ( 36) 00:07:14.026 17845.957 - 17946.782: 94.2774% ( 41) 00:07:14.026 17946.782 - 18047.606: 94.5082% ( 35) 00:07:14.026 18047.606 - 18148.431: 94.8114% ( 46) 00:07:14.026 18148.431 - 18249.255: 95.0949% ( 43) 00:07:14.026 18249.255 - 18350.080: 95.4246% ( 50) 00:07:14.026 18350.080 - 18450.905: 95.7278% ( 46) 00:07:14.026 18450.905 - 18551.729: 96.0575% ( 50) 00:07:14.026 18551.729 - 18652.554: 96.3542% ( 45) 00:07:14.026 18652.554 - 18753.378: 96.6113% ( 39) 00:07:14.026 18753.378 - 18854.203: 96.8025% ( 29) 00:07:14.026 18854.203 - 18955.028: 96.9211% ( 18) 00:07:14.026 18955.028 - 19055.852: 97.0398% ( 18) 00:07:14.026 19055.852 - 19156.677: 97.1519% ( 17) 00:07:14.026 19156.677 - 19257.502: 97.2903% ( 21) 00:07:14.026 19257.502 - 19358.326: 97.4222% ( 20) 00:07:14.026 19358.326 - 19459.151: 97.5343% ( 17) 00:07:14.026 19459.151 - 19559.975: 97.6332% ( 15) 00:07:14.026 19559.975 - 19660.800: 97.7189% ( 13) 00:07:14.026 19660.800 - 19761.625: 97.8244% ( 16) 00:07:14.026 19761.625 - 19862.449: 97.9364% ( 17) 00:07:14.026 19862.449 - 19963.274: 98.0353% ( 15) 00:07:14.026 19963.274 - 20064.098: 98.1474% ( 17) 00:07:14.026 20064.098 - 20164.923: 98.2727% ( 19) 00:07:14.027 20164.923 - 20265.748: 98.3914% ( 18) 00:07:14.027 20265.748 - 20366.572: 98.5034% ( 17) 00:07:14.027 20366.572 - 20467.397: 98.6155% ( 17) 00:07:14.027 20467.397 - 20568.222: 98.7474% ( 20) 00:07:14.027 20568.222 - 20669.046: 98.8528% ( 16) 00:07:14.027 20669.046 - 20769.871: 98.9715% ( 18) 00:07:14.027 20769.871 - 20870.695: 99.0836% ( 17) 00:07:14.027 20870.695 - 20971.520: 99.1891% ( 16) 00:07:14.027 20971.520 - 21072.345: 99.3012% ( 17) 00:07:14.027 21072.345 - 21173.169: 99.4001% ( 15) 00:07:14.027 21173.169 - 21273.994: 99.5055% ( 16) 00:07:14.027 21273.994 - 21374.818: 99.5649% ( 9) 00:07:14.027 21374.818 - 21475.643: 99.5781% ( 2) 00:07:14.027 24601.206 - 24702.031: 99.5912% ( 2) 00:07:14.027 24702.031 - 24802.855: 99.6176% ( 4) 00:07:14.027 24802.855 - 24903.680: 99.6440% ( 4) 00:07:14.027 24903.680 - 25004.505: 99.6704% ( 4) 00:07:14.027 25004.505 - 25105.329: 99.6901% ( 3) 00:07:14.027 25105.329 - 25206.154: 99.7165% ( 4) 00:07:14.027 25206.154 - 25306.978: 99.7495% ( 5) 00:07:14.027 25306.978 - 25407.803: 99.7693% ( 3) 00:07:14.027 25407.803 - 25508.628: 99.7956% ( 4) 00:07:14.027 25508.628 - 25609.452: 99.8220% ( 4) 00:07:14.027 25609.452 - 25710.277: 99.8484% ( 4) 00:07:14.027 25710.277 - 25811.102: 99.8747% ( 4) 00:07:14.027 25811.102 - 26012.751: 99.9275% ( 8) 00:07:14.027 26012.751 - 26214.400: 99.9802% ( 8) 00:07:14.027 26214.400 - 26416.049: 100.0000% ( 3) 00:07:14.027 00:07:14.027 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:14.027 ============================================================================== 00:07:14.027 Range in us Cumulative IO count 00:07:14.027 5822.622 - 5847.828: 0.0132% ( 2) 00:07:14.027 5847.828 - 5873.034: 0.0198% ( 1) 00:07:14.027 5873.034 - 5898.240: 0.0330% ( 2) 00:07:14.027 5898.240 - 5923.446: 0.0461% ( 2) 00:07:14.027 5923.446 - 5948.652: 0.0593% ( 2) 00:07:14.027 5948.652 - 5973.858: 0.0725% ( 2) 00:07:14.027 5973.858 - 5999.065: 0.1121% ( 6) 00:07:14.027 5999.065 - 6024.271: 0.1648% ( 8) 00:07:14.027 6024.271 - 6049.477: 0.2307% ( 10) 00:07:14.027 6049.477 - 6074.683: 0.3099% ( 12) 00:07:14.027 6074.683 - 
6099.889: 0.4088% ( 15) 00:07:14.027 6099.889 - 6125.095: 0.5604% ( 23) 00:07:14.027 6125.095 - 6150.302: 0.8834% ( 49) 00:07:14.027 6150.302 - 6175.508: 1.2395% ( 54) 00:07:14.027 6175.508 - 6200.714: 1.6482% ( 62) 00:07:14.027 6200.714 - 6225.920: 2.3273% ( 103) 00:07:14.027 6225.920 - 6251.126: 3.2371% ( 138) 00:07:14.027 6251.126 - 6276.332: 4.5029% ( 192) 00:07:14.027 6276.332 - 6301.538: 5.9401% ( 218) 00:07:14.027 6301.538 - 6326.745: 7.3972% ( 221) 00:07:14.027 6326.745 - 6351.951: 8.9926% ( 242) 00:07:14.027 6351.951 - 6377.157: 10.6079% ( 245) 00:07:14.027 6377.157 - 6402.363: 12.1770% ( 238) 00:07:14.027 6402.363 - 6427.569: 13.7526% ( 239) 00:07:14.027 6427.569 - 6452.775: 15.4140% ( 252) 00:07:14.027 6452.775 - 6503.188: 18.7368% ( 504) 00:07:14.027 6503.188 - 6553.600: 22.1255% ( 514) 00:07:14.027 6553.600 - 6604.012: 25.6461% ( 534) 00:07:14.027 6604.012 - 6654.425: 29.1073% ( 525) 00:07:14.027 6654.425 - 6704.837: 32.6609% ( 539) 00:07:14.027 6704.837 - 6755.249: 36.2210% ( 540) 00:07:14.027 6755.249 - 6805.662: 39.7745% ( 539) 00:07:14.027 6805.662 - 6856.074: 43.2292% ( 524) 00:07:14.027 6856.074 - 6906.486: 46.7234% ( 530) 00:07:14.027 6906.486 - 6956.898: 50.1582% ( 521) 00:07:14.027 6956.898 - 7007.311: 53.5799% ( 519) 00:07:14.027 7007.311 - 7057.723: 57.0148% ( 521) 00:07:14.027 7057.723 - 7108.135: 60.1134% ( 470) 00:07:14.027 7108.135 - 7158.548: 63.0011% ( 438) 00:07:14.027 7158.548 - 7208.960: 65.2690% ( 344) 00:07:14.027 7208.960 - 7259.372: 66.6601% ( 211) 00:07:14.027 7259.372 - 7309.785: 67.6424% ( 149) 00:07:14.027 7309.785 - 7360.197: 68.4072% ( 116) 00:07:14.027 7360.197 - 7410.609: 69.1258% ( 109) 00:07:14.027 7410.609 - 7461.022: 69.7983% ( 102) 00:07:14.027 7461.022 - 7511.434: 70.4246% ( 95) 00:07:14.027 7511.434 - 7561.846: 71.0047% ( 88) 00:07:14.027 7561.846 - 7612.258: 71.6179% ( 93) 00:07:14.027 7612.258 - 7662.671: 72.1255% ( 77) 00:07:14.027 7662.671 - 7713.083: 72.5936% ( 71) 00:07:14.027 7713.083 - 7763.495: 73.0617% ( 71) 00:07:14.027 7763.495 - 7813.908: 73.5430% ( 73) 00:07:14.027 7813.908 - 7864.320: 74.0374% ( 75) 00:07:14.027 7864.320 - 7914.732: 74.5385% ( 76) 00:07:14.027 7914.732 - 7965.145: 74.9670% ( 65) 00:07:14.027 7965.145 - 8015.557: 75.3165% ( 53) 00:07:14.027 8015.557 - 8065.969: 75.6988% ( 58) 00:07:14.027 8065.969 - 8116.382: 76.0680% ( 56) 00:07:14.027 8116.382 - 8166.794: 76.4306% ( 55) 00:07:14.027 8166.794 - 8217.206: 76.7801% ( 53) 00:07:14.027 8217.206 - 8267.618: 77.1097% ( 50) 00:07:14.027 8267.618 - 8318.031: 77.4789% ( 56) 00:07:14.027 8318.031 - 8368.443: 77.8283% ( 53) 00:07:14.027 8368.443 - 8418.855: 78.1118% ( 43) 00:07:14.027 8418.855 - 8469.268: 78.3689% ( 39) 00:07:14.027 8469.268 - 8519.680: 78.5931% ( 34) 00:07:14.027 8519.680 - 8570.092: 78.7711% ( 27) 00:07:14.027 8570.092 - 8620.505: 78.9689% ( 30) 00:07:14.027 8620.505 - 8670.917: 79.1337% ( 25) 00:07:14.027 8670.917 - 8721.329: 79.3249% ( 29) 00:07:14.027 8721.329 - 8771.742: 79.4502% ( 19) 00:07:14.027 8771.742 - 8822.154: 79.5688% ( 18) 00:07:14.027 8822.154 - 8872.566: 79.6809% ( 17) 00:07:14.027 8872.566 - 8922.978: 79.7930% ( 17) 00:07:14.027 8922.978 - 8973.391: 79.9446% ( 23) 00:07:14.027 8973.391 - 9023.803: 80.1160% ( 26) 00:07:14.027 9023.803 - 9074.215: 80.2743% ( 24) 00:07:14.027 9074.215 - 9124.628: 80.4391% ( 25) 00:07:14.027 9124.628 - 9175.040: 80.5907% ( 23) 00:07:14.027 9175.040 - 9225.452: 80.7555% ( 25) 00:07:14.027 9225.452 - 9275.865: 80.9335% ( 27) 00:07:14.027 9275.865 - 9326.277: 81.0852% ( 23) 00:07:14.027 9326.277 - 
9376.689: 81.2500% ( 25) 00:07:14.027 9376.689 - 9427.102: 81.4148% ( 25) 00:07:14.027 9427.102 - 9477.514: 81.5862% ( 26) 00:07:14.027 9477.514 - 9527.926: 81.7708% ( 28) 00:07:14.027 9527.926 - 9578.338: 81.9488% ( 27) 00:07:14.027 9578.338 - 9628.751: 82.1268% ( 27) 00:07:14.027 9628.751 - 9679.163: 82.2917% ( 25) 00:07:14.027 9679.163 - 9729.575: 82.4565% ( 25) 00:07:14.027 9729.575 - 9779.988: 82.6147% ( 24) 00:07:14.027 9779.988 - 9830.400: 82.8125% ( 30) 00:07:14.027 9830.400 - 9880.812: 82.9971% ( 28) 00:07:14.027 9880.812 - 9931.225: 83.1290% ( 20) 00:07:14.027 9931.225 - 9981.637: 83.2740% ( 22) 00:07:14.027 9981.637 - 10032.049: 83.3927% ( 18) 00:07:14.027 10032.049 - 10082.462: 83.5311% ( 21) 00:07:14.027 10082.462 - 10132.874: 83.6696% ( 21) 00:07:14.027 10132.874 - 10183.286: 83.8278% ( 24) 00:07:14.027 10183.286 - 10233.698: 83.9662% ( 21) 00:07:14.027 10233.698 - 10284.111: 84.0915% ( 19) 00:07:14.027 10284.111 - 10334.523: 84.2036% ( 17) 00:07:14.027 10334.523 - 10384.935: 84.3157% ( 17) 00:07:14.027 10384.935 - 10435.348: 84.4475% ( 20) 00:07:14.027 10435.348 - 10485.760: 84.5728% ( 19) 00:07:14.027 10485.760 - 10536.172: 84.6783% ( 16) 00:07:14.027 10536.172 - 10586.585: 84.7838% ( 16) 00:07:14.027 10586.585 - 10636.997: 84.8826% ( 15) 00:07:14.027 10636.997 - 10687.409: 84.9815% ( 15) 00:07:14.027 10687.409 - 10737.822: 85.1068% ( 19) 00:07:14.027 10737.822 - 10788.234: 85.2057% ( 15) 00:07:14.027 10788.234 - 10838.646: 85.3178% ( 17) 00:07:14.027 10838.646 - 10889.058: 85.4035% ( 13) 00:07:14.027 10889.058 - 10939.471: 85.5024% ( 15) 00:07:14.027 10939.471 - 10989.883: 85.5881% ( 13) 00:07:14.027 10989.883 - 11040.295: 85.6672% ( 12) 00:07:14.027 11040.295 - 11090.708: 85.7529% ( 13) 00:07:14.027 11090.708 - 11141.120: 85.8254% ( 11) 00:07:14.027 11141.120 - 11191.532: 85.8914% ( 10) 00:07:14.027 11191.532 - 11241.945: 85.9771% ( 13) 00:07:14.027 11241.945 - 11292.357: 86.0628% ( 13) 00:07:14.027 11292.357 - 11342.769: 86.1287% ( 10) 00:07:14.027 11342.769 - 11393.182: 86.2342% ( 16) 00:07:14.027 11393.182 - 11443.594: 86.3199% ( 13) 00:07:14.027 11443.594 - 11494.006: 86.3924% ( 11) 00:07:14.027 11494.006 - 11544.418: 86.4847% ( 14) 00:07:14.027 11544.418 - 11594.831: 86.5704% ( 13) 00:07:14.027 11594.831 - 11645.243: 86.6495% ( 12) 00:07:14.027 11645.243 - 11695.655: 86.7220% ( 11) 00:07:14.027 11695.655 - 11746.068: 86.7946% ( 11) 00:07:14.027 11746.068 - 11796.480: 86.8473% ( 8) 00:07:14.027 11796.480 - 11846.892: 86.9132% ( 10) 00:07:14.027 11846.892 - 11897.305: 86.9792% ( 10) 00:07:14.027 11897.305 - 11947.717: 87.0319% ( 8) 00:07:14.027 11947.717 - 11998.129: 87.0781% ( 7) 00:07:14.027 11998.129 - 12048.542: 87.1176% ( 6) 00:07:14.027 12048.542 - 12098.954: 87.1572% ( 6) 00:07:14.027 12098.954 - 12149.366: 87.2033% ( 7) 00:07:14.027 12149.366 - 12199.778: 87.2363% ( 5) 00:07:14.027 12199.778 - 12250.191: 87.2758% ( 6) 00:07:14.027 12250.191 - 12300.603: 87.3022% ( 4) 00:07:14.027 12300.603 - 12351.015: 87.3286% ( 4) 00:07:14.027 12351.015 - 12401.428: 87.3352% ( 1) 00:07:14.027 12401.428 - 12451.840: 87.3418% ( 1) 00:07:14.027 12502.252 - 12552.665: 87.3484% ( 1) 00:07:14.027 12552.665 - 12603.077: 87.3550% ( 1) 00:07:14.027 12603.077 - 12653.489: 87.3747% ( 3) 00:07:14.027 12653.489 - 12703.902: 87.4209% ( 7) 00:07:14.027 12703.902 - 12754.314: 87.4670% ( 7) 00:07:14.027 12754.314 - 12804.726: 87.5396% ( 11) 00:07:14.027 12804.726 - 12855.138: 87.5989% ( 9) 00:07:14.027 12855.138 - 12905.551: 87.6648% ( 10) 00:07:14.027 12905.551 - 13006.375: 87.9022% ( 36) 
00:07:14.027 13006.375 - 13107.200: 88.0999% ( 30) 00:07:14.027 13107.200 - 13208.025: 88.2780% ( 27) 00:07:14.027 13208.025 - 13308.849: 88.4428% ( 25) 00:07:14.027 13308.849 - 13409.674: 88.5944% ( 23) 00:07:14.027 13409.674 - 13510.498: 88.7790% ( 28) 00:07:14.027 13510.498 - 13611.323: 89.0098% ( 35) 00:07:14.027 13611.323 - 13712.148: 89.2405% ( 35) 00:07:14.027 13712.148 - 13812.972: 89.4185% ( 27) 00:07:14.027 13812.972 - 13913.797: 89.6361% ( 33) 00:07:14.027 13913.797 - 14014.622: 89.7745% ( 21) 00:07:14.027 14014.622 - 14115.446: 89.8734% ( 15) 00:07:14.027 14115.446 - 14216.271: 89.9591% ( 13) 00:07:14.027 14216.271 - 14317.095: 90.0580% ( 15) 00:07:14.027 14317.095 - 14417.920: 90.1569% ( 15) 00:07:14.027 14417.920 - 14518.745: 90.2558% ( 15) 00:07:14.027 14518.745 - 14619.569: 90.3943% ( 21) 00:07:14.027 14619.569 - 14720.394: 90.5393% ( 22) 00:07:14.028 14720.394 - 14821.218: 90.6711% ( 20) 00:07:14.028 14821.218 - 14922.043: 90.7898% ( 18) 00:07:14.028 14922.043 - 15022.868: 90.8887% ( 15) 00:07:14.028 15022.868 - 15123.692: 90.9612% ( 11) 00:07:14.028 15123.692 - 15224.517: 91.0074% ( 7) 00:07:14.028 15224.517 - 15325.342: 91.0535% ( 7) 00:07:14.028 15325.342 - 15426.166: 91.0931% ( 6) 00:07:14.028 15426.166 - 15526.991: 91.1326% ( 6) 00:07:14.028 15526.991 - 15627.815: 91.1656% ( 5) 00:07:14.028 15627.815 - 15728.640: 91.2118% ( 7) 00:07:14.028 15728.640 - 15829.465: 91.2645% ( 8) 00:07:14.028 15829.465 - 15930.289: 91.3304% ( 10) 00:07:14.028 15930.289 - 16031.114: 91.3964% ( 10) 00:07:14.028 16031.114 - 16131.938: 91.4491% ( 8) 00:07:14.028 16131.938 - 16232.763: 91.4953% ( 7) 00:07:14.028 16232.763 - 16333.588: 91.5348% ( 6) 00:07:14.028 16333.588 - 16434.412: 91.5810% ( 7) 00:07:14.028 16434.412 - 16535.237: 91.6535% ( 11) 00:07:14.028 16535.237 - 16636.062: 91.7722% ( 18) 00:07:14.028 16636.062 - 16736.886: 91.9304% ( 24) 00:07:14.028 16736.886 - 16837.711: 92.0886% ( 24) 00:07:14.028 16837.711 - 16938.535: 92.2666% ( 27) 00:07:14.028 16938.535 - 17039.360: 92.4117% ( 22) 00:07:14.028 17039.360 - 17140.185: 92.5897% ( 27) 00:07:14.028 17140.185 - 17241.009: 92.8270% ( 36) 00:07:14.028 17241.009 - 17341.834: 93.0775% ( 38) 00:07:14.028 17341.834 - 17442.658: 93.3478% ( 41) 00:07:14.028 17442.658 - 17543.483: 93.6313% ( 43) 00:07:14.028 17543.483 - 17644.308: 93.9346% ( 46) 00:07:14.028 17644.308 - 17745.132: 94.2181% ( 43) 00:07:14.028 17745.132 - 17845.957: 94.5016% ( 43) 00:07:14.028 17845.957 - 17946.782: 94.7587% ( 39) 00:07:14.028 17946.782 - 18047.606: 94.9829% ( 34) 00:07:14.028 18047.606 - 18148.431: 95.2532% ( 41) 00:07:14.028 18148.431 - 18249.255: 95.5498% ( 45) 00:07:14.028 18249.255 - 18350.080: 95.8597% ( 47) 00:07:14.028 18350.080 - 18450.905: 96.1630% ( 46) 00:07:14.028 18450.905 - 18551.729: 96.3739% ( 32) 00:07:14.028 18551.729 - 18652.554: 96.5717% ( 30) 00:07:14.028 18652.554 - 18753.378: 96.7563% ( 28) 00:07:14.028 18753.378 - 18854.203: 96.9277% ( 26) 00:07:14.028 18854.203 - 18955.028: 97.0992% ( 26) 00:07:14.028 18955.028 - 19055.852: 97.2442% ( 22) 00:07:14.028 19055.852 - 19156.677: 97.4552% ( 32) 00:07:14.028 19156.677 - 19257.502: 97.6332% ( 27) 00:07:14.028 19257.502 - 19358.326: 97.7914% ( 24) 00:07:14.028 19358.326 - 19459.151: 97.9628% ( 26) 00:07:14.028 19459.151 - 19559.975: 98.1210% ( 24) 00:07:14.028 19559.975 - 19660.800: 98.2859% ( 25) 00:07:14.028 19660.800 - 19761.625: 98.4177% ( 20) 00:07:14.028 19761.625 - 19862.449: 98.5496% ( 20) 00:07:14.028 19862.449 - 19963.274: 98.6880% ( 21) 00:07:14.028 19963.274 - 20064.098: 98.8067% ( 
18) 00:07:14.028 20064.098 - 20164.923: 98.9056% ( 15) 00:07:14.028 20164.923 - 20265.748: 99.0243% ( 18) 00:07:14.028 20265.748 - 20366.572: 99.1100% ( 13) 00:07:14.028 20366.572 - 20467.397: 99.1891% ( 12) 00:07:14.028 20467.397 - 20568.222: 99.2748% ( 13) 00:07:14.028 20568.222 - 20669.046: 99.3341% ( 9) 00:07:14.028 20669.046 - 20769.871: 99.3935% ( 9) 00:07:14.028 20769.871 - 20870.695: 99.4528% ( 9) 00:07:14.028 20870.695 - 20971.520: 99.5187% ( 10) 00:07:14.028 20971.520 - 21072.345: 99.5715% ( 8) 00:07:14.028 21072.345 - 21173.169: 99.5781% ( 1) 00:07:14.028 22988.012 - 23088.837: 99.5912% ( 2) 00:07:14.028 23088.837 - 23189.662: 99.6176% ( 4) 00:07:14.028 23189.662 - 23290.486: 99.6440% ( 4) 00:07:14.028 23290.486 - 23391.311: 99.6638% ( 3) 00:07:14.028 23391.311 - 23492.135: 99.6901% ( 4) 00:07:14.028 23492.135 - 23592.960: 99.7099% ( 3) 00:07:14.028 23592.960 - 23693.785: 99.7297% ( 3) 00:07:14.028 23693.785 - 23794.609: 99.7561% ( 4) 00:07:14.028 23794.609 - 23895.434: 99.7758% ( 3) 00:07:14.028 23895.434 - 23996.258: 99.8022% ( 4) 00:07:14.028 23996.258 - 24097.083: 99.8286% ( 4) 00:07:14.028 24097.083 - 24197.908: 99.8484% ( 3) 00:07:14.028 24197.908 - 24298.732: 99.8747% ( 4) 00:07:14.028 24298.732 - 24399.557: 99.8945% ( 3) 00:07:14.028 24399.557 - 24500.382: 99.9209% ( 4) 00:07:14.028 24500.382 - 24601.206: 99.9407% ( 3) 00:07:14.028 24601.206 - 24702.031: 99.9604% ( 3) 00:07:14.028 24702.031 - 24802.855: 99.9868% ( 4) 00:07:14.028 24802.855 - 24903.680: 100.0000% ( 2) 00:07:14.028 00:07:14.028 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:14.028 ============================================================================== 00:07:14.028 Range in us Cumulative IO count 00:07:14.028 5898.240 - 5923.446: 0.0264% ( 4) 00:07:14.028 5923.446 - 5948.652: 0.0461% ( 3) 00:07:14.028 5948.652 - 5973.858: 0.0857% ( 6) 00:07:14.028 5973.858 - 5999.065: 0.1121% ( 4) 00:07:14.028 5999.065 - 6024.271: 0.1516% ( 6) 00:07:14.028 6024.271 - 6049.477: 0.2373% ( 13) 00:07:14.028 6049.477 - 6074.683: 0.3165% ( 12) 00:07:14.028 6074.683 - 6099.889: 0.4219% ( 16) 00:07:14.028 6099.889 - 6125.095: 0.5934% ( 26) 00:07:14.028 6125.095 - 6150.302: 0.8175% ( 34) 00:07:14.028 6150.302 - 6175.508: 1.1735% ( 54) 00:07:14.028 6175.508 - 6200.714: 1.5691% ( 60) 00:07:14.028 6200.714 - 6225.920: 2.1954% ( 95) 00:07:14.028 6225.920 - 6251.126: 3.1052% ( 138) 00:07:14.028 6251.126 - 6276.332: 4.1469% ( 158) 00:07:14.028 6276.332 - 6301.538: 5.3995% ( 190) 00:07:14.028 6301.538 - 6326.745: 6.8434% ( 219) 00:07:14.028 6326.745 - 6351.951: 8.6168% ( 269) 00:07:14.028 6351.951 - 6377.157: 10.2189% ( 243) 00:07:14.028 6377.157 - 6402.363: 11.8407% ( 246) 00:07:14.028 6402.363 - 6427.569: 13.4230% ( 240) 00:07:14.028 6427.569 - 6452.775: 15.0514% ( 247) 00:07:14.028 6452.775 - 6503.188: 18.3478% ( 500) 00:07:14.028 6503.188 - 6553.600: 21.8025% ( 524) 00:07:14.028 6553.600 - 6604.012: 25.3824% ( 543) 00:07:14.028 6604.012 - 6654.425: 28.9557% ( 542) 00:07:14.028 6654.425 - 6704.837: 32.4960% ( 537) 00:07:14.028 6704.837 - 6755.249: 36.2210% ( 565) 00:07:14.028 6755.249 - 6805.662: 39.7218% ( 531) 00:07:14.028 6805.662 - 6856.074: 43.3083% ( 544) 00:07:14.028 6856.074 - 6906.486: 46.8750% ( 541) 00:07:14.028 6906.486 - 6956.898: 50.3428% ( 526) 00:07:14.028 6956.898 - 7007.311: 53.8766% ( 536) 00:07:14.028 7007.311 - 7057.723: 57.2323% ( 509) 00:07:14.028 7057.723 - 7108.135: 60.3903% ( 479) 00:07:14.028 7108.135 - 7158.548: 63.2120% ( 428) 00:07:14.028 7158.548 - 7208.960: 65.4866% ( 345) 
00:07:14.028 7208.960 - 7259.372: 66.8315% ( 204) 00:07:14.028 7259.372 - 7309.785: 67.8138% ( 149) 00:07:14.028 7309.785 - 7360.197: 68.5852% ( 117) 00:07:14.028 7360.197 - 7410.609: 69.3236% ( 112) 00:07:14.028 7410.609 - 7461.022: 69.9763% ( 99) 00:07:14.028 7461.022 - 7511.434: 70.5960% ( 94) 00:07:14.028 7511.434 - 7561.846: 71.2421% ( 98) 00:07:14.028 7561.846 - 7612.258: 71.7102% ( 71) 00:07:14.028 7612.258 - 7662.671: 72.1387% ( 65) 00:07:14.028 7662.671 - 7713.083: 72.5541% ( 63) 00:07:14.028 7713.083 - 7763.495: 72.9892% ( 66) 00:07:14.028 7763.495 - 7813.908: 73.4507% ( 70) 00:07:14.028 7813.908 - 7864.320: 73.9583% ( 77) 00:07:14.028 7864.320 - 7914.732: 74.4594% ( 76) 00:07:14.028 7914.732 - 7965.145: 74.9143% ( 69) 00:07:14.028 7965.145 - 8015.557: 75.3626% ( 68) 00:07:14.028 8015.557 - 8065.969: 75.7450% ( 58) 00:07:14.028 8065.969 - 8116.382: 76.1537% ( 62) 00:07:14.028 8116.382 - 8166.794: 76.5098% ( 54) 00:07:14.028 8166.794 - 8217.206: 76.8526% ( 52) 00:07:14.028 8217.206 - 8267.618: 77.1493% ( 45) 00:07:14.028 8267.618 - 8318.031: 77.4525% ( 46) 00:07:14.028 8318.031 - 8368.443: 77.7756% ( 49) 00:07:14.028 8368.443 - 8418.855: 78.0525% ( 42) 00:07:14.028 8418.855 - 8469.268: 78.3162% ( 40) 00:07:14.028 8469.268 - 8519.680: 78.5667% ( 38) 00:07:14.028 8519.680 - 8570.092: 78.8172% ( 38) 00:07:14.028 8570.092 - 8620.505: 79.0282% ( 32) 00:07:14.028 8620.505 - 8670.917: 79.2590% ( 35) 00:07:14.028 8670.917 - 8721.329: 79.4633% ( 31) 00:07:14.028 8721.329 - 8771.742: 79.6479% ( 28) 00:07:14.028 8771.742 - 8822.154: 79.7798% ( 20) 00:07:14.028 8822.154 - 8872.566: 79.8853% ( 16) 00:07:14.028 8872.566 - 8922.978: 80.0105% ( 19) 00:07:14.028 8922.978 - 8973.391: 80.1358% ( 19) 00:07:14.028 8973.391 - 9023.803: 80.2809% ( 22) 00:07:14.028 9023.803 - 9074.215: 80.4391% ( 24) 00:07:14.028 9074.215 - 9124.628: 80.5709% ( 20) 00:07:14.028 9124.628 - 9175.040: 80.7094% ( 21) 00:07:14.028 9175.040 - 9225.452: 80.8215% ( 17) 00:07:14.028 9225.452 - 9275.865: 80.9467% ( 19) 00:07:14.028 9275.865 - 9326.277: 81.0720% ( 19) 00:07:14.028 9326.277 - 9376.689: 81.2039% ( 20) 00:07:14.029 9376.689 - 9427.102: 81.3423% ( 21) 00:07:14.029 9427.102 - 9477.514: 81.4873% ( 22) 00:07:14.029 9477.514 - 9527.926: 81.6324% ( 22) 00:07:14.029 9527.926 - 9578.338: 81.7708% ( 21) 00:07:14.029 9578.338 - 9628.751: 81.9225% ( 23) 00:07:14.029 9628.751 - 9679.163: 82.0873% ( 25) 00:07:14.029 9679.163 - 9729.575: 82.2257% ( 21) 00:07:14.029 9729.575 - 9779.988: 82.3576% ( 20) 00:07:14.029 9779.988 - 9830.400: 82.4960% ( 21) 00:07:14.029 9830.400 - 9880.812: 82.6477% ( 23) 00:07:14.029 9880.812 - 9931.225: 82.7993% ( 23) 00:07:14.029 9931.225 - 9981.637: 82.9575% ( 24) 00:07:14.029 9981.637 - 10032.049: 83.0960% ( 21) 00:07:14.029 10032.049 - 10082.462: 83.2542% ( 24) 00:07:14.029 10082.462 - 10132.874: 83.4124% ( 24) 00:07:14.029 10132.874 - 10183.286: 83.6234% ( 32) 00:07:14.029 10183.286 - 10233.698: 83.8014% ( 27) 00:07:14.029 10233.698 - 10284.111: 83.9728% ( 26) 00:07:14.029 10284.111 - 10334.523: 84.1508% ( 27) 00:07:14.029 10334.523 - 10384.935: 84.3025% ( 23) 00:07:14.029 10384.935 - 10435.348: 84.4343% ( 20) 00:07:14.029 10435.348 - 10485.760: 84.5662% ( 20) 00:07:14.029 10485.760 - 10536.172: 84.6783% ( 17) 00:07:14.029 10536.172 - 10586.585: 84.7969% ( 18) 00:07:14.029 10586.585 - 10636.997: 84.9354% ( 21) 00:07:14.029 10636.997 - 10687.409: 85.0804% ( 22) 00:07:14.029 10687.409 - 10737.822: 85.2387% ( 24) 00:07:14.029 10737.822 - 10788.234: 85.3837% ( 22) 00:07:14.029 10788.234 - 10838.646: 
85.5156% ( 20) 00:07:14.029 10838.646 - 10889.058: 85.6408% ( 19) 00:07:14.029 10889.058 - 10939.471: 85.7727% ( 20) 00:07:14.029 10939.471 - 10989.883: 85.8848% ( 17) 00:07:14.029 10989.883 - 11040.295: 85.9705% ( 13) 00:07:14.029 11040.295 - 11090.708: 86.0364% ( 10) 00:07:14.029 11090.708 - 11141.120: 86.0759% ( 6) 00:07:14.029 11141.120 - 11191.532: 86.1287% ( 8) 00:07:14.029 11191.532 - 11241.945: 86.1682% ( 6) 00:07:14.029 11241.945 - 11292.357: 86.2210% ( 8) 00:07:14.029 11292.357 - 11342.769: 86.2935% ( 11) 00:07:14.029 11342.769 - 11393.182: 86.3594% ( 10) 00:07:14.029 11393.182 - 11443.594: 86.4254% ( 10) 00:07:14.029 11443.594 - 11494.006: 86.4781% ( 8) 00:07:14.029 11494.006 - 11544.418: 86.5440% ( 10) 00:07:14.029 11544.418 - 11594.831: 86.6100% ( 10) 00:07:14.029 11594.831 - 11645.243: 86.6627% ( 8) 00:07:14.029 11645.243 - 11695.655: 86.6825% ( 3) 00:07:14.029 11695.655 - 11746.068: 86.7023% ( 3) 00:07:14.029 11746.068 - 11796.480: 86.7220% ( 3) 00:07:14.029 11796.480 - 11846.892: 86.7484% ( 4) 00:07:14.029 11846.892 - 11897.305: 86.7682% ( 3) 00:07:14.029 11897.305 - 11947.717: 86.7880% ( 3) 00:07:14.029 11947.717 - 11998.129: 86.8078% ( 3) 00:07:14.029 11998.129 - 12048.542: 86.8275% ( 3) 00:07:14.029 12048.542 - 12098.954: 86.8539% ( 4) 00:07:14.029 12098.954 - 12149.366: 86.8737% ( 3) 00:07:14.029 12149.366 - 12199.778: 86.8935% ( 3) 00:07:14.029 12199.778 - 12250.191: 86.9132% ( 3) 00:07:14.029 12250.191 - 12300.603: 86.9198% ( 1) 00:07:14.029 12300.603 - 12351.015: 86.9264% ( 1) 00:07:14.029 12351.015 - 12401.428: 86.9660% ( 6) 00:07:14.029 12401.428 - 12451.840: 86.9924% ( 4) 00:07:14.029 12451.840 - 12502.252: 87.0253% ( 5) 00:07:14.029 12502.252 - 12552.665: 87.0912% ( 10) 00:07:14.029 12552.665 - 12603.077: 87.1506% ( 9) 00:07:14.029 12603.077 - 12653.489: 87.2033% ( 8) 00:07:14.029 12653.489 - 12703.902: 87.2693% ( 10) 00:07:14.029 12703.902 - 12754.314: 87.3484% ( 12) 00:07:14.029 12754.314 - 12804.726: 87.4209% ( 11) 00:07:14.029 12804.726 - 12855.138: 87.5066% ( 13) 00:07:14.029 12855.138 - 12905.551: 87.5725% ( 10) 00:07:14.029 12905.551 - 13006.375: 87.7307% ( 24) 00:07:14.029 13006.375 - 13107.200: 87.9022% ( 26) 00:07:14.029 13107.200 - 13208.025: 88.1725% ( 41) 00:07:14.029 13208.025 - 13308.849: 88.4560% ( 43) 00:07:14.029 13308.849 - 13409.674: 88.6933% ( 36) 00:07:14.029 13409.674 - 13510.498: 88.9702% ( 42) 00:07:14.029 13510.498 - 13611.323: 89.2207% ( 38) 00:07:14.029 13611.323 - 13712.148: 89.4185% ( 30) 00:07:14.029 13712.148 - 13812.972: 89.6031% ( 28) 00:07:14.029 13812.972 - 13913.797: 89.7482% ( 22) 00:07:14.029 13913.797 - 14014.622: 89.8998% ( 23) 00:07:14.029 14014.622 - 14115.446: 90.0580% ( 24) 00:07:14.029 14115.446 - 14216.271: 90.2031% ( 22) 00:07:14.029 14216.271 - 14317.095: 90.3283% ( 19) 00:07:14.029 14317.095 - 14417.920: 90.4734% ( 22) 00:07:14.029 14417.920 - 14518.745: 90.5920% ( 18) 00:07:14.029 14518.745 - 14619.569: 90.6646% ( 11) 00:07:14.029 14619.569 - 14720.394: 90.7371% ( 11) 00:07:14.029 14720.394 - 14821.218: 90.8228% ( 13) 00:07:14.029 14821.218 - 14922.043: 90.9019% ( 12) 00:07:14.029 14922.043 - 15022.868: 90.9876% ( 13) 00:07:14.029 15022.868 - 15123.692: 91.0469% ( 9) 00:07:14.029 15123.692 - 15224.517: 91.0931% ( 7) 00:07:14.029 15224.517 - 15325.342: 91.1986% ( 16) 00:07:14.029 15325.342 - 15426.166: 91.2975% ( 15) 00:07:14.029 15426.166 - 15526.991: 91.3766% ( 12) 00:07:14.029 15526.991 - 15627.815: 91.4557% ( 12) 00:07:14.029 15627.815 - 15728.640: 91.5348% ( 12) 00:07:14.029 15728.640 - 15829.465: 91.6667% ( 
20) 00:07:14.029 15829.465 - 15930.289: 91.7787% ( 17) 00:07:14.029 15930.289 - 16031.114: 91.8908% ( 17) 00:07:14.029 16031.114 - 16131.938: 91.9963% ( 16) 00:07:14.029 16131.938 - 16232.763: 92.1150% ( 18) 00:07:14.029 16232.763 - 16333.588: 92.2139% ( 15) 00:07:14.029 16333.588 - 16434.412: 92.2534% ( 6) 00:07:14.029 16434.412 - 16535.237: 92.3985% ( 22) 00:07:14.029 16535.237 - 16636.062: 92.5040% ( 16) 00:07:14.029 16636.062 - 16736.886: 92.6424% ( 21) 00:07:14.029 16736.886 - 16837.711: 92.7809% ( 21) 00:07:14.029 16837.711 - 16938.535: 92.8863% ( 16) 00:07:14.029 16938.535 - 17039.360: 92.9786% ( 14) 00:07:14.029 17039.360 - 17140.185: 93.0907% ( 17) 00:07:14.029 17140.185 - 17241.009: 93.2160% ( 19) 00:07:14.029 17241.009 - 17341.834: 93.3808% ( 25) 00:07:14.029 17341.834 - 17442.658: 93.5588% ( 27) 00:07:14.029 17442.658 - 17543.483: 93.7434% ( 28) 00:07:14.029 17543.483 - 17644.308: 94.0005% ( 39) 00:07:14.029 17644.308 - 17745.132: 94.2906% ( 44) 00:07:14.029 17745.132 - 17845.957: 94.5214% ( 35) 00:07:14.029 17845.957 - 17946.782: 94.7455% ( 34) 00:07:14.029 17946.782 - 18047.606: 95.0554% ( 47) 00:07:14.029 18047.606 - 18148.431: 95.4180% ( 55) 00:07:14.029 18148.431 - 18249.255: 95.7147% ( 45) 00:07:14.029 18249.255 - 18350.080: 95.9916% ( 42) 00:07:14.029 18350.080 - 18450.905: 96.2553% ( 40) 00:07:14.029 18450.905 - 18551.729: 96.4992% ( 37) 00:07:14.029 18551.729 - 18652.554: 96.7300% ( 35) 00:07:14.029 18652.554 - 18753.378: 96.9541% ( 34) 00:07:14.029 18753.378 - 18854.203: 97.1321% ( 27) 00:07:14.029 18854.203 - 18955.028: 97.3167% ( 28) 00:07:14.029 18955.028 - 19055.852: 97.4618% ( 22) 00:07:14.029 19055.852 - 19156.677: 97.5738% ( 17) 00:07:14.029 19156.677 - 19257.502: 97.6727% ( 15) 00:07:14.029 19257.502 - 19358.326: 97.7584% ( 13) 00:07:14.029 19358.326 - 19459.151: 97.8573% ( 15) 00:07:14.029 19459.151 - 19559.975: 97.9496% ( 14) 00:07:14.029 19559.975 - 19660.800: 98.0156% ( 10) 00:07:14.029 19660.800 - 19761.625: 98.0881% ( 11) 00:07:14.029 19761.625 - 19862.449: 98.1738% ( 13) 00:07:14.029 19862.449 - 19963.274: 98.3056% ( 20) 00:07:14.029 19963.274 - 20064.098: 98.4309% ( 19) 00:07:14.029 20064.098 - 20164.923: 98.5298% ( 15) 00:07:14.029 20164.923 - 20265.748: 98.5957% ( 10) 00:07:14.029 20265.748 - 20366.572: 98.6551% ( 9) 00:07:14.029 20366.572 - 20467.397: 98.7144% ( 9) 00:07:14.029 20467.397 - 20568.222: 98.7935% ( 12) 00:07:14.029 20568.222 - 20669.046: 98.8990% ( 16) 00:07:14.029 20669.046 - 20769.871: 98.9781% ( 12) 00:07:14.029 20769.871 - 20870.695: 99.0836% ( 16) 00:07:14.029 20870.695 - 20971.520: 99.1759% ( 14) 00:07:14.029 20971.520 - 21072.345: 99.2814% ( 16) 00:07:14.029 21072.345 - 21173.169: 99.3407% ( 9) 00:07:14.029 21173.169 - 21273.994: 99.4132% ( 11) 00:07:14.029 21273.994 - 21374.818: 99.4989% ( 13) 00:07:14.029 21374.818 - 21475.643: 99.5912% ( 14) 00:07:14.029 21475.643 - 21576.468: 99.6638% ( 11) 00:07:14.029 21576.468 - 21677.292: 99.6835% ( 3) 00:07:14.029 21677.292 - 21778.117: 99.7099% ( 4) 00:07:14.029 21778.117 - 21878.942: 99.7363% ( 4) 00:07:14.029 21878.942 - 21979.766: 99.7561% ( 3) 00:07:14.029 21979.766 - 22080.591: 99.7824% ( 4) 00:07:14.029 22080.591 - 22181.415: 99.8022% ( 3) 00:07:14.029 22181.415 - 22282.240: 99.8286% ( 4) 00:07:14.029 22282.240 - 22383.065: 99.8550% ( 4) 00:07:14.029 22383.065 - 22483.889: 99.8747% ( 3) 00:07:14.029 22483.889 - 22584.714: 99.9011% ( 4) 00:07:14.029 22584.714 - 22685.538: 99.9209% ( 3) 00:07:14.029 22685.538 - 22786.363: 99.9473% ( 4) 00:07:14.029 22786.363 - 22887.188: 99.9670% ( 
3) 00:07:14.029 22887.188 - 22988.012: 99.9934% ( 4) 00:07:14.029 22988.012 - 23088.837: 100.0000% ( 1) 00:07:14.029 00:07:14.029 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:14.029 ============================================================================== 00:07:14.029 Range in us Cumulative IO count 00:07:14.029 5898.240 - 5923.446: 0.0132% ( 2) 00:07:14.029 5923.446 - 5948.652: 0.0659% ( 8) 00:07:14.029 5948.652 - 5973.858: 0.1187% ( 8) 00:07:14.029 5973.858 - 5999.065: 0.1648% ( 7) 00:07:14.029 5999.065 - 6024.271: 0.2242% ( 9) 00:07:14.029 6024.271 - 6049.477: 0.2835% ( 9) 00:07:14.029 6049.477 - 6074.683: 0.3758% ( 14) 00:07:14.029 6074.683 - 6099.889: 0.4747% ( 15) 00:07:14.029 6099.889 - 6125.095: 0.5868% ( 17) 00:07:14.029 6125.095 - 6150.302: 0.7516% ( 25) 00:07:14.029 6150.302 - 6175.508: 1.0153% ( 40) 00:07:14.029 6175.508 - 6200.714: 1.4900% ( 72) 00:07:14.029 6200.714 - 6225.920: 2.1822% ( 105) 00:07:14.029 6225.920 - 6251.126: 2.9404% ( 115) 00:07:14.029 6251.126 - 6276.332: 3.8436% ( 137) 00:07:14.029 6276.332 - 6301.538: 5.2413% ( 212) 00:07:14.029 6301.538 - 6326.745: 6.8170% ( 239) 00:07:14.029 6326.745 - 6351.951: 8.4718% ( 251) 00:07:14.029 6351.951 - 6377.157: 10.1134% ( 249) 00:07:14.029 6377.157 - 6402.363: 11.8803% ( 268) 00:07:14.029 6402.363 - 6427.569: 13.5417% ( 252) 00:07:14.029 6427.569 - 6452.775: 15.0251% ( 225) 00:07:14.029 6452.775 - 6503.188: 18.4072% ( 513) 00:07:14.029 6503.188 - 6553.600: 21.5585% ( 478) 00:07:14.029 6553.600 - 6604.012: 25.0725% ( 533) 00:07:14.029 6604.012 - 6654.425: 28.5272% ( 524) 00:07:14.029 6654.425 - 6704.837: 31.9884% ( 525) 00:07:14.029 6704.837 - 6755.249: 35.6276% ( 552) 00:07:14.030 6755.249 - 6805.662: 39.3921% ( 571) 00:07:14.030 6805.662 - 6856.074: 42.8797% ( 529) 00:07:14.030 6856.074 - 6906.486: 46.3410% ( 525) 00:07:14.030 6906.486 - 6956.898: 49.8945% ( 539) 00:07:14.030 6956.898 - 7007.311: 53.4415% ( 538) 00:07:14.030 7007.311 - 7057.723: 56.7774% ( 506) 00:07:14.030 7057.723 - 7108.135: 59.9947% ( 488) 00:07:14.030 7108.135 - 7158.548: 62.8626% ( 435) 00:07:14.030 7158.548 - 7208.960: 65.1437% ( 346) 00:07:14.030 7208.960 - 7259.372: 66.4689% ( 201) 00:07:14.030 7259.372 - 7309.785: 67.4578% ( 150) 00:07:14.030 7309.785 - 7360.197: 68.2819% ( 125) 00:07:14.030 7360.197 - 7410.609: 69.0928% ( 123) 00:07:14.030 7410.609 - 7461.022: 69.8774% ( 119) 00:07:14.030 7461.022 - 7511.434: 70.6619% ( 119) 00:07:14.030 7511.434 - 7561.846: 71.2948% ( 96) 00:07:14.030 7561.846 - 7612.258: 71.8420% ( 83) 00:07:14.030 7612.258 - 7662.671: 72.2640% ( 64) 00:07:14.030 7662.671 - 7713.083: 72.6595% ( 60) 00:07:14.030 7713.083 - 7763.495: 73.0815% ( 64) 00:07:14.030 7763.495 - 7813.908: 73.5100% ( 65) 00:07:14.030 7813.908 - 7864.320: 73.9451% ( 66) 00:07:14.030 7864.320 - 7914.732: 74.3803% ( 66) 00:07:14.030 7914.732 - 7965.145: 74.7956% ( 63) 00:07:14.030 7965.145 - 8015.557: 75.2835% ( 74) 00:07:14.030 8015.557 - 8065.969: 75.7120% ( 65) 00:07:14.030 8065.969 - 8116.382: 76.1603% ( 68) 00:07:14.030 8116.382 - 8166.794: 76.6284% ( 71) 00:07:14.030 8166.794 - 8217.206: 77.0108% ( 58) 00:07:14.030 8217.206 - 8267.618: 77.3009% ( 44) 00:07:14.030 8267.618 - 8318.031: 77.5712% ( 41) 00:07:14.030 8318.031 - 8368.443: 77.8415% ( 41) 00:07:14.030 8368.443 - 8418.855: 78.1184% ( 42) 00:07:14.030 8418.855 - 8469.268: 78.3294% ( 32) 00:07:14.030 8469.268 - 8519.680: 78.5272% ( 30) 00:07:14.030 8519.680 - 8570.092: 78.7249% ( 30) 00:07:14.030 8570.092 - 8620.505: 78.9161% ( 29) 00:07:14.030 8620.505 - 8670.917: 
79.1007% ( 28) 00:07:14.030 8670.917 - 8721.329: 79.3315% ( 35) 00:07:14.030 8721.329 - 8771.742: 79.5293% ( 30) 00:07:14.030 8771.742 - 8822.154: 79.7073% ( 27) 00:07:14.030 8822.154 - 8872.566: 79.9314% ( 34) 00:07:14.030 8872.566 - 8922.978: 80.1556% ( 34) 00:07:14.030 8922.978 - 8973.391: 80.3270% ( 26) 00:07:14.030 8973.391 - 9023.803: 80.4918% ( 25) 00:07:14.030 9023.803 - 9074.215: 80.6896% ( 30) 00:07:14.030 9074.215 - 9124.628: 80.8412% ( 23) 00:07:14.030 9124.628 - 9175.040: 80.9995% ( 24) 00:07:14.030 9175.040 - 9225.452: 81.1709% ( 26) 00:07:14.030 9225.452 - 9275.865: 81.3093% ( 21) 00:07:14.030 9275.865 - 9326.277: 81.4346% ( 19) 00:07:14.030 9326.277 - 9376.689: 81.5533% ( 18) 00:07:14.030 9376.689 - 9427.102: 81.6785% ( 19) 00:07:14.030 9427.102 - 9477.514: 81.8236% ( 22) 00:07:14.030 9477.514 - 9527.926: 81.9422% ( 18) 00:07:14.030 9527.926 - 9578.338: 82.1071% ( 25) 00:07:14.030 9578.338 - 9628.751: 82.2521% ( 22) 00:07:14.030 9628.751 - 9679.163: 82.4433% ( 29) 00:07:14.030 9679.163 - 9729.575: 82.5752% ( 20) 00:07:14.030 9729.575 - 9779.988: 82.7268% ( 23) 00:07:14.030 9779.988 - 9830.400: 82.8389% ( 17) 00:07:14.030 9830.400 - 9880.812: 82.9575% ( 18) 00:07:14.030 9880.812 - 9931.225: 83.0696% ( 17) 00:07:14.030 9931.225 - 9981.637: 83.1619% ( 14) 00:07:14.030 9981.637 - 10032.049: 83.2147% ( 8) 00:07:14.030 10032.049 - 10082.462: 83.2740% ( 9) 00:07:14.030 10082.462 - 10132.874: 83.3399% ( 10) 00:07:14.030 10132.874 - 10183.286: 83.4322% ( 14) 00:07:14.030 10183.286 - 10233.698: 83.5509% ( 18) 00:07:14.030 10233.698 - 10284.111: 83.6959% ( 22) 00:07:14.030 10284.111 - 10334.523: 83.8080% ( 17) 00:07:14.030 10334.523 - 10384.935: 83.9465% ( 21) 00:07:14.030 10384.935 - 10435.348: 84.0915% ( 22) 00:07:14.030 10435.348 - 10485.760: 84.2366% ( 22) 00:07:14.030 10485.760 - 10536.172: 84.3552% ( 18) 00:07:14.030 10536.172 - 10586.585: 84.5003% ( 22) 00:07:14.030 10586.585 - 10636.997: 84.6321% ( 20) 00:07:14.030 10636.997 - 10687.409: 84.7772% ( 22) 00:07:14.030 10687.409 - 10737.822: 84.9156% ( 21) 00:07:14.030 10737.822 - 10788.234: 85.0936% ( 27) 00:07:14.030 10788.234 - 10838.646: 85.2518% ( 24) 00:07:14.030 10838.646 - 10889.058: 85.3903% ( 21) 00:07:14.030 10889.058 - 10939.471: 85.5419% ( 23) 00:07:14.030 10939.471 - 10989.883: 85.6804% ( 21) 00:07:14.030 10989.883 - 11040.295: 85.8254% ( 22) 00:07:14.030 11040.295 - 11090.708: 85.9507% ( 19) 00:07:14.030 11090.708 - 11141.120: 86.0759% ( 19) 00:07:14.030 11141.120 - 11191.532: 86.1814% ( 16) 00:07:14.030 11191.532 - 11241.945: 86.2737% ( 14) 00:07:14.030 11241.945 - 11292.357: 86.3528% ( 12) 00:07:14.030 11292.357 - 11342.769: 86.4188% ( 10) 00:07:14.030 11342.769 - 11393.182: 86.4847% ( 10) 00:07:14.030 11393.182 - 11443.594: 86.5243% ( 6) 00:07:14.030 11443.594 - 11494.006: 86.5704% ( 7) 00:07:14.030 11494.006 - 11544.418: 86.6166% ( 7) 00:07:14.030 11544.418 - 11594.831: 86.6561% ( 6) 00:07:14.030 11594.831 - 11645.243: 86.7023% ( 7) 00:07:14.030 11645.243 - 11695.655: 86.7220% ( 3) 00:07:14.030 11695.655 - 11746.068: 86.7418% ( 3) 00:07:14.030 11746.068 - 11796.480: 86.7616% ( 3) 00:07:14.030 11796.480 - 11846.892: 86.7880% ( 4) 00:07:14.030 11846.892 - 11897.305: 86.8078% ( 3) 00:07:14.030 11897.305 - 11947.717: 86.8275% ( 3) 00:07:14.030 11947.717 - 11998.129: 86.8473% ( 3) 00:07:14.030 11998.129 - 12048.542: 86.8803% ( 5) 00:07:14.030 12048.542 - 12098.954: 86.9264% ( 7) 00:07:14.030 12098.954 - 12149.366: 86.9660% ( 6) 00:07:14.030 12149.366 - 12199.778: 86.9924% ( 4) 00:07:14.030 12199.778 - 12250.191: 
87.0187% ( 4) 00:07:14.030 12250.191 - 12300.603: 87.0715% ( 8) 00:07:14.030 12300.603 - 12351.015: 87.1572% ( 13) 00:07:14.030 12351.015 - 12401.428: 87.2099% ( 8) 00:07:14.030 12401.428 - 12451.840: 87.2693% ( 9) 00:07:14.030 12451.840 - 12502.252: 87.3286% ( 9) 00:07:14.030 12502.252 - 12552.665: 87.4011% ( 11) 00:07:14.030 12552.665 - 12603.077: 87.4802% ( 12) 00:07:14.030 12603.077 - 12653.489: 87.5527% ( 11) 00:07:14.030 12653.489 - 12703.902: 87.6319% ( 12) 00:07:14.030 12703.902 - 12754.314: 87.7110% ( 12) 00:07:14.030 12754.314 - 12804.726: 87.7835% ( 11) 00:07:14.030 12804.726 - 12855.138: 87.8692% ( 13) 00:07:14.030 12855.138 - 12905.551: 87.9615% ( 14) 00:07:14.030 12905.551 - 13006.375: 88.1527% ( 29) 00:07:14.030 13006.375 - 13107.200: 88.4098% ( 39) 00:07:14.030 13107.200 - 13208.025: 88.6274% ( 33) 00:07:14.030 13208.025 - 13308.849: 88.8318% ( 31) 00:07:14.030 13308.849 - 13409.674: 89.0229% ( 29) 00:07:14.030 13409.674 - 13510.498: 89.1746% ( 23) 00:07:14.030 13510.498 - 13611.323: 89.3130% ( 21) 00:07:14.030 13611.323 - 13712.148: 89.4383% ( 19) 00:07:14.030 13712.148 - 13812.972: 89.5636% ( 19) 00:07:14.030 13812.972 - 13913.797: 89.6756% ( 17) 00:07:14.030 13913.797 - 14014.622: 89.7679% ( 14) 00:07:14.030 14014.622 - 14115.446: 89.8207% ( 8) 00:07:14.030 14115.446 - 14216.271: 89.9328% ( 17) 00:07:14.030 14216.271 - 14317.095: 90.0185% ( 13) 00:07:14.030 14317.095 - 14417.920: 90.1437% ( 19) 00:07:14.030 14417.920 - 14518.745: 90.2756% ( 20) 00:07:14.030 14518.745 - 14619.569: 90.4206% ( 22) 00:07:14.030 14619.569 - 14720.394: 90.5657% ( 22) 00:07:14.030 14720.394 - 14821.218: 90.6909% ( 19) 00:07:14.030 14821.218 - 14922.043: 90.8426% ( 23) 00:07:14.030 14922.043 - 15022.868: 91.0272% ( 28) 00:07:14.030 15022.868 - 15123.692: 91.1722% ( 22) 00:07:14.030 15123.692 - 15224.517: 91.3964% ( 34) 00:07:14.030 15224.517 - 15325.342: 91.5810% ( 28) 00:07:14.030 15325.342 - 15426.166: 91.7524% ( 26) 00:07:14.030 15426.166 - 15526.991: 91.9238% ( 26) 00:07:14.030 15526.991 - 15627.815: 92.0688% ( 22) 00:07:14.030 15627.815 - 15728.640: 92.2007% ( 20) 00:07:14.030 15728.640 - 15829.465: 92.3391% ( 21) 00:07:14.030 15829.465 - 15930.289: 92.4842% ( 22) 00:07:14.030 15930.289 - 16031.114: 92.6094% ( 19) 00:07:14.030 16031.114 - 16131.938: 92.7149% ( 16) 00:07:14.030 16131.938 - 16232.763: 92.7940% ( 12) 00:07:14.030 16232.763 - 16333.588: 92.8204% ( 4) 00:07:14.030 16333.588 - 16434.412: 92.8270% ( 1) 00:07:14.030 16636.062 - 16736.886: 92.8402% ( 2) 00:07:14.030 16736.886 - 16837.711: 92.8732% ( 5) 00:07:14.030 16837.711 - 16938.535: 92.9391% ( 10) 00:07:14.030 16938.535 - 17039.360: 93.0380% ( 15) 00:07:14.030 17039.360 - 17140.185: 93.1237% ( 13) 00:07:14.030 17140.185 - 17241.009: 93.2687% ( 22) 00:07:14.030 17241.009 - 17341.834: 93.3808% ( 17) 00:07:14.030 17341.834 - 17442.658: 93.5588% ( 27) 00:07:14.030 17442.658 - 17543.483: 93.7764% ( 33) 00:07:14.030 17543.483 - 17644.308: 94.0994% ( 49) 00:07:14.030 17644.308 - 17745.132: 94.4357% ( 51) 00:07:14.030 17745.132 - 17845.957: 94.6994% ( 40) 00:07:14.030 17845.957 - 17946.782: 94.9829% ( 43) 00:07:14.030 17946.782 - 18047.606: 95.2861% ( 46) 00:07:14.030 18047.606 - 18148.431: 95.6290% ( 52) 00:07:14.030 18148.431 - 18249.255: 95.8597% ( 35) 00:07:14.030 18249.255 - 18350.080: 96.1102% ( 38) 00:07:14.030 18350.080 - 18450.905: 96.4003% ( 44) 00:07:14.030 18450.905 - 18551.729: 96.6443% ( 37) 00:07:14.030 18551.729 - 18652.554: 96.7893% ( 22) 00:07:14.030 18652.554 - 18753.378: 96.9409% ( 23) 00:07:14.030 18753.378 - 
18854.203: 97.1189% ( 27) 00:07:14.030 18854.203 - 18955.028: 97.2903% ( 26) 00:07:14.030 18955.028 - 19055.852: 97.4288% ( 21) 00:07:14.030 19055.852 - 19156.677: 97.5672% ( 21) 00:07:14.030 19156.677 - 19257.502: 97.6859% ( 18) 00:07:14.030 19257.502 - 19358.326: 97.7980% ( 17) 00:07:14.031 19358.326 - 19459.151: 97.9167% ( 18) 00:07:14.031 19459.151 - 19559.975: 98.0222% ( 16) 00:07:14.031 19559.975 - 19660.800: 98.1408% ( 18) 00:07:14.031 19660.800 - 19761.625: 98.2793% ( 21) 00:07:14.031 19761.625 - 19862.449: 98.4045% ( 19) 00:07:14.031 19862.449 - 19963.274: 98.4902% ( 13) 00:07:14.031 19963.274 - 20064.098: 98.5562% ( 10) 00:07:14.031 20064.098 - 20164.923: 98.5957% ( 6) 00:07:14.031 20164.923 - 20265.748: 98.6287% ( 5) 00:07:14.031 20265.748 - 20366.572: 98.6682% ( 6) 00:07:14.031 20366.572 - 20467.397: 98.7144% ( 7) 00:07:14.031 20467.397 - 20568.222: 98.7605% ( 7) 00:07:14.031 20568.222 - 20669.046: 98.8001% ( 6) 00:07:14.031 20669.046 - 20769.871: 98.8726% ( 11) 00:07:14.031 20769.871 - 20870.695: 98.9979% ( 19) 00:07:14.031 20870.695 - 20971.520: 99.1166% ( 18) 00:07:14.031 20971.520 - 21072.345: 99.2484% ( 20) 00:07:14.031 21072.345 - 21173.169: 99.3803% ( 20) 00:07:14.031 21173.169 - 21273.994: 99.4924% ( 17) 00:07:14.031 21273.994 - 21374.818: 99.5912% ( 15) 00:07:14.031 21374.818 - 21475.643: 99.6835% ( 14) 00:07:14.031 21475.643 - 21576.468: 99.7824% ( 15) 00:07:14.031 21576.468 - 21677.292: 99.8747% ( 14) 00:07:14.031 21677.292 - 21778.117: 99.9539% ( 12) 00:07:14.031 21778.117 - 21878.942: 99.9934% ( 6) 00:07:14.031 21878.942 - 21979.766: 100.0000% ( 1) 00:07:14.031 00:07:14.031 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:14.031 ============================================================================== 00:07:14.031 Range in us Cumulative IO count 00:07:14.031 5898.240 - 5923.446: 0.0330% ( 5) 00:07:14.031 5923.446 - 5948.652: 0.0923% ( 9) 00:07:14.031 5948.652 - 5973.858: 0.1319% ( 6) 00:07:14.031 5973.858 - 5999.065: 0.1780% ( 7) 00:07:14.031 5999.065 - 6024.271: 0.2307% ( 8) 00:07:14.031 6024.271 - 6049.477: 0.3033% ( 11) 00:07:14.031 6049.477 - 6074.683: 0.3824% ( 12) 00:07:14.031 6074.683 - 6099.889: 0.4813% ( 15) 00:07:14.031 6099.889 - 6125.095: 0.6197% ( 21) 00:07:14.031 6125.095 - 6150.302: 0.8175% ( 30) 00:07:14.031 6150.302 - 6175.508: 1.0483% ( 35) 00:07:14.031 6175.508 - 6200.714: 1.5164% ( 71) 00:07:14.031 6200.714 - 6225.920: 2.0767% ( 85) 00:07:14.031 6225.920 - 6251.126: 2.9338% ( 130) 00:07:14.031 6251.126 - 6276.332: 3.9491% ( 154) 00:07:14.031 6276.332 - 6301.538: 5.2874% ( 203) 00:07:14.031 6301.538 - 6326.745: 6.9027% ( 245) 00:07:14.031 6326.745 - 6351.951: 8.4454% ( 234) 00:07:14.031 6351.951 - 6377.157: 10.0870% ( 249) 00:07:14.031 6377.157 - 6402.363: 11.7880% ( 258) 00:07:14.031 6402.363 - 6427.569: 13.5417% ( 266) 00:07:14.031 6427.569 - 6452.775: 15.0448% ( 228) 00:07:14.031 6452.775 - 6503.188: 18.3544% ( 502) 00:07:14.031 6503.188 - 6553.600: 21.4399% ( 468) 00:07:14.031 6553.600 - 6604.012: 24.9539% ( 533) 00:07:14.031 6604.012 - 6654.425: 28.4349% ( 528) 00:07:14.031 6654.425 - 6704.837: 31.9225% ( 529) 00:07:14.031 6704.837 - 6755.249: 35.5485% ( 550) 00:07:14.031 6755.249 - 6805.662: 39.0691% ( 534) 00:07:14.031 6805.662 - 6856.074: 42.5567% ( 529) 00:07:14.031 6856.074 - 6906.486: 46.0047% ( 523) 00:07:14.031 6906.486 - 6956.898: 49.4396% ( 521) 00:07:14.031 6956.898 - 7007.311: 52.9866% ( 538) 00:07:14.031 7007.311 - 7057.723: 56.4280% ( 522) 00:07:14.031 7057.723 - 7108.135: 59.5992% ( 481) 00:07:14.031 
7108.135 - 7158.548: 62.4934% ( 439) 00:07:14.031 7158.548 - 7208.960: 64.7152% ( 337) 00:07:14.031 7208.960 - 7259.372: 66.0601% ( 204) 00:07:14.031 7259.372 - 7309.785: 67.0886% ( 156) 00:07:14.031 7309.785 - 7360.197: 67.9523% ( 131) 00:07:14.031 7360.197 - 7410.609: 68.7500% ( 121) 00:07:14.031 7410.609 - 7461.022: 69.4357% ( 104) 00:07:14.031 7461.022 - 7511.434: 70.1477% ( 108) 00:07:14.031 7511.434 - 7561.846: 70.8729% ( 110) 00:07:14.031 7561.846 - 7612.258: 71.4662% ( 90) 00:07:14.031 7612.258 - 7662.671: 72.0332% ( 86) 00:07:14.031 7662.671 - 7713.083: 72.5475% ( 78) 00:07:14.031 7713.083 - 7763.495: 73.0485% ( 76) 00:07:14.031 7763.495 - 7813.908: 73.4705% ( 64) 00:07:14.031 7813.908 - 7864.320: 73.9122% ( 67) 00:07:14.031 7864.320 - 7914.732: 74.3605% ( 68) 00:07:14.031 7914.732 - 7965.145: 74.7561% ( 60) 00:07:14.031 7965.145 - 8015.557: 75.2176% ( 70) 00:07:14.031 8015.557 - 8065.969: 75.6395% ( 64) 00:07:14.031 8065.969 - 8116.382: 76.0285% ( 59) 00:07:14.031 8116.382 - 8166.794: 76.3779% ( 53) 00:07:14.031 8166.794 - 8217.206: 76.7273% ( 53) 00:07:14.031 8217.206 - 8267.618: 77.0965% ( 56) 00:07:14.031 8267.618 - 8318.031: 77.3932% ( 45) 00:07:14.031 8318.031 - 8368.443: 77.7360% ( 52) 00:07:14.031 8368.443 - 8418.855: 78.0525% ( 48) 00:07:14.031 8418.855 - 8469.268: 78.3492% ( 45) 00:07:14.031 8469.268 - 8519.680: 78.5733% ( 34) 00:07:14.031 8519.680 - 8570.092: 78.7315% ( 24) 00:07:14.031 8570.092 - 8620.505: 78.9293% ( 30) 00:07:14.031 8620.505 - 8670.917: 79.1469% ( 33) 00:07:14.031 8670.917 - 8721.329: 79.3710% ( 34) 00:07:14.031 8721.329 - 8771.742: 79.5754% ( 31) 00:07:14.031 8771.742 - 8822.154: 79.8194% ( 37) 00:07:14.031 8822.154 - 8872.566: 80.0633% ( 37) 00:07:14.031 8872.566 - 8922.978: 80.2545% ( 29) 00:07:14.031 8922.978 - 8973.391: 80.4655% ( 32) 00:07:14.031 8973.391 - 9023.803: 80.6632% ( 30) 00:07:14.031 9023.803 - 9074.215: 80.8610% ( 30) 00:07:14.031 9074.215 - 9124.628: 81.1050% ( 37) 00:07:14.031 9124.628 - 9175.040: 81.3225% ( 33) 00:07:14.031 9175.040 - 9225.452: 81.5137% ( 29) 00:07:14.031 9225.452 - 9275.865: 81.7115% ( 30) 00:07:14.031 9275.865 - 9326.277: 81.9225% ( 32) 00:07:14.031 9326.277 - 9376.689: 82.1005% ( 27) 00:07:14.031 9376.689 - 9427.102: 82.2785% ( 27) 00:07:14.031 9427.102 - 9477.514: 82.4499% ( 26) 00:07:14.031 9477.514 - 9527.926: 82.5949% ( 22) 00:07:14.031 9527.926 - 9578.338: 82.7136% ( 18) 00:07:14.031 9578.338 - 9628.751: 82.8059% ( 14) 00:07:14.031 9628.751 - 9679.163: 82.9048% ( 15) 00:07:14.031 9679.163 - 9729.575: 82.9905% ( 13) 00:07:14.031 9729.575 - 9779.988: 83.0696% ( 12) 00:07:14.031 9779.988 - 9830.400: 83.1685% ( 15) 00:07:14.031 9830.400 - 9880.812: 83.2542% ( 13) 00:07:14.031 9880.812 - 9931.225: 83.3663% ( 17) 00:07:14.031 9931.225 - 9981.637: 83.4520% ( 13) 00:07:14.031 9981.637 - 10032.049: 83.5245% ( 11) 00:07:14.031 10032.049 - 10082.462: 83.6036% ( 12) 00:07:14.031 10082.462 - 10132.874: 83.6762% ( 11) 00:07:14.031 10132.874 - 10183.286: 83.7487% ( 11) 00:07:14.031 10183.286 - 10233.698: 83.8410% ( 14) 00:07:14.031 10233.698 - 10284.111: 83.9531% ( 17) 00:07:14.031 10284.111 - 10334.523: 84.0388% ( 13) 00:07:14.031 10334.523 - 10384.935: 84.1179% ( 12) 00:07:14.031 10384.935 - 10435.348: 84.1970% ( 12) 00:07:14.031 10435.348 - 10485.760: 84.2761% ( 12) 00:07:14.031 10485.760 - 10536.172: 84.3816% ( 16) 00:07:14.031 10536.172 - 10586.585: 84.4673% ( 13) 00:07:14.031 10586.585 - 10636.997: 84.5860% ( 18) 00:07:14.031 10636.997 - 10687.409: 84.7178% ( 20) 00:07:14.031 10687.409 - 10737.822: 84.8761% ( 24) 
00:07:14.031 10737.822 - 10788.234: 84.9881% ( 17) 00:07:14.031 10788.234 - 10838.646: 85.1134% ( 19) 00:07:14.031 10838.646 - 10889.058: 85.2189% ( 16) 00:07:14.031 10889.058 - 10939.471: 85.3310% ( 17) 00:07:14.031 10939.471 - 10989.883: 85.4364% ( 16) 00:07:14.031 10989.883 - 11040.295: 85.5419% ( 16) 00:07:14.031 11040.295 - 11090.708: 85.6540% ( 17) 00:07:14.031 11090.708 - 11141.120: 85.7595% ( 16) 00:07:14.031 11141.120 - 11191.532: 85.8518% ( 14) 00:07:14.031 11191.532 - 11241.945: 85.9309% ( 12) 00:07:14.031 11241.945 - 11292.357: 86.0232% ( 14) 00:07:14.031 11292.357 - 11342.769: 86.1089% ( 13) 00:07:14.031 11342.769 - 11393.182: 86.1880% ( 12) 00:07:14.031 11393.182 - 11443.594: 86.2671% ( 12) 00:07:14.031 11443.594 - 11494.006: 86.3528% ( 13) 00:07:14.031 11494.006 - 11544.418: 86.3990% ( 7) 00:07:14.031 11544.418 - 11594.831: 86.4649% ( 10) 00:07:14.031 11594.831 - 11645.243: 86.5309% ( 10) 00:07:14.031 11645.243 - 11695.655: 86.5572% ( 4) 00:07:14.031 11695.655 - 11746.068: 86.5770% ( 3) 00:07:14.031 11746.068 - 11796.480: 86.5968% ( 3) 00:07:14.031 11796.480 - 11846.892: 86.6100% ( 2) 00:07:14.032 11846.892 - 11897.305: 86.6297% ( 3) 00:07:14.032 11897.305 - 11947.717: 86.6495% ( 3) 00:07:14.032 11947.717 - 11998.129: 86.6693% ( 3) 00:07:14.032 11998.129 - 12048.542: 86.6891% ( 3) 00:07:14.032 12048.542 - 12098.954: 86.7220% ( 5) 00:07:14.032 12098.954 - 12149.366: 86.7550% ( 5) 00:07:14.032 12149.366 - 12199.778: 86.8012% ( 7) 00:07:14.032 12199.778 - 12250.191: 86.8737% ( 11) 00:07:14.032 12250.191 - 12300.603: 86.9594% ( 13) 00:07:14.032 12300.603 - 12351.015: 87.0187% ( 9) 00:07:14.032 12351.015 - 12401.428: 87.0912% ( 11) 00:07:14.032 12401.428 - 12451.840: 87.1770% ( 13) 00:07:14.032 12451.840 - 12502.252: 87.2627% ( 13) 00:07:14.032 12502.252 - 12552.665: 87.3484% ( 13) 00:07:14.032 12552.665 - 12603.077: 87.4341% ( 13) 00:07:14.032 12603.077 - 12653.489: 87.5264% ( 14) 00:07:14.032 12653.489 - 12703.902: 87.6121% ( 13) 00:07:14.032 12703.902 - 12754.314: 87.6978% ( 13) 00:07:14.032 12754.314 - 12804.726: 87.7901% ( 14) 00:07:14.032 12804.726 - 12855.138: 87.8824% ( 14) 00:07:14.032 12855.138 - 12905.551: 87.9747% ( 14) 00:07:14.032 12905.551 - 13006.375: 88.1922% ( 33) 00:07:14.032 13006.375 - 13107.200: 88.4164% ( 34) 00:07:14.032 13107.200 - 13208.025: 88.6010% ( 28) 00:07:14.032 13208.025 - 13308.849: 88.7263% ( 19) 00:07:14.032 13308.849 - 13409.674: 88.9241% ( 30) 00:07:14.032 13409.674 - 13510.498: 89.1284% ( 31) 00:07:14.032 13510.498 - 13611.323: 89.3064% ( 27) 00:07:14.032 13611.323 - 13712.148: 89.4844% ( 27) 00:07:14.032 13712.148 - 13812.972: 89.6559% ( 26) 00:07:14.032 13812.972 - 13913.797: 89.8075% ( 23) 00:07:14.032 13913.797 - 14014.622: 89.9393% ( 20) 00:07:14.032 14014.622 - 14115.446: 90.0646% ( 19) 00:07:14.032 14115.446 - 14216.271: 90.1899% ( 19) 00:07:14.032 14216.271 - 14317.095: 90.3151% ( 19) 00:07:14.032 14317.095 - 14417.920: 90.4668% ( 23) 00:07:14.032 14417.920 - 14518.745: 90.5986% ( 20) 00:07:14.032 14518.745 - 14619.569: 90.6909% ( 14) 00:07:14.032 14619.569 - 14720.394: 90.8426% ( 23) 00:07:14.032 14720.394 - 14821.218: 91.0403% ( 30) 00:07:14.032 14821.218 - 14922.043: 91.2447% ( 31) 00:07:14.032 14922.043 - 15022.868: 91.4293% ( 28) 00:07:14.032 15022.868 - 15123.692: 91.6073% ( 27) 00:07:14.032 15123.692 - 15224.517: 91.7656% ( 24) 00:07:14.032 15224.517 - 15325.342: 91.9633% ( 30) 00:07:14.032 15325.342 - 15426.166: 92.1150% ( 23) 00:07:14.032 15426.166 - 15526.991: 92.2732% ( 24) 00:07:14.032 15526.991 - 15627.815: 92.4248% ( 
23) 00:07:14.032 15627.815 - 15728.640: 92.5369% ( 17) 00:07:14.032 15728.640 - 15829.465: 92.6160% ( 12) 00:07:14.032 15829.465 - 15930.289: 92.6754% ( 9) 00:07:14.032 15930.289 - 16031.114: 92.7149% ( 6) 00:07:14.032 16031.114 - 16131.938: 92.7545% ( 6) 00:07:14.032 16131.938 - 16232.763: 92.7940% ( 6) 00:07:14.032 16232.763 - 16333.588: 92.8270% ( 5) 00:07:14.032 16535.237 - 16636.062: 92.8468% ( 3) 00:07:14.032 16636.062 - 16736.886: 92.8732% ( 4) 00:07:14.032 16736.886 - 16837.711: 92.9918% ( 18) 00:07:14.032 16837.711 - 16938.535: 93.0775% ( 13) 00:07:14.032 16938.535 - 17039.360: 93.1632% ( 13) 00:07:14.032 17039.360 - 17140.185: 93.2160% ( 8) 00:07:14.032 17140.185 - 17241.009: 93.3412% ( 19) 00:07:14.032 17241.009 - 17341.834: 93.4863% ( 22) 00:07:14.032 17341.834 - 17442.658: 93.6577% ( 26) 00:07:14.032 17442.658 - 17543.483: 93.8950% ( 36) 00:07:14.032 17543.483 - 17644.308: 94.1258% ( 35) 00:07:14.032 17644.308 - 17745.132: 94.3368% ( 32) 00:07:14.032 17745.132 - 17845.957: 94.6334% ( 45) 00:07:14.032 17845.957 - 17946.782: 94.9960% ( 55) 00:07:14.032 17946.782 - 18047.606: 95.4048% ( 62) 00:07:14.032 18047.606 - 18148.431: 95.7674% ( 55) 00:07:14.032 18148.431 - 18249.255: 96.1234% ( 54) 00:07:14.032 18249.255 - 18350.080: 96.4794% ( 54) 00:07:14.032 18350.080 - 18450.905: 96.8289% ( 53) 00:07:14.032 18450.905 - 18551.729: 97.1585% ( 50) 00:07:14.032 18551.729 - 18652.554: 97.4288% ( 41) 00:07:14.032 18652.554 - 18753.378: 97.6925% ( 40) 00:07:14.032 18753.378 - 18854.203: 97.9364% ( 37) 00:07:14.032 18854.203 - 18955.028: 98.1145% ( 27) 00:07:14.032 18955.028 - 19055.852: 98.2463% ( 20) 00:07:14.032 19055.852 - 19156.677: 98.3716% ( 19) 00:07:14.032 19156.677 - 19257.502: 98.4573% ( 13) 00:07:14.032 19257.502 - 19358.326: 98.5232% ( 10) 00:07:14.032 19358.326 - 19459.151: 98.5694% ( 7) 00:07:14.032 19459.151 - 19559.975: 98.6221% ( 8) 00:07:14.032 19559.975 - 19660.800: 98.6748% ( 8) 00:07:14.032 19660.800 - 19761.625: 98.7276% ( 8) 00:07:14.032 19761.625 - 19862.449: 98.7342% ( 1) 00:07:14.032 20164.923 - 20265.748: 98.7605% ( 4) 00:07:14.032 20265.748 - 20366.572: 98.7869% ( 4) 00:07:14.032 20366.572 - 20467.397: 98.8067% ( 3) 00:07:14.032 20467.397 - 20568.222: 98.8331% ( 4) 00:07:14.032 20568.222 - 20669.046: 98.9122% ( 12) 00:07:14.032 20669.046 - 20769.871: 98.9847% ( 11) 00:07:14.032 20769.871 - 20870.695: 99.0506% ( 10) 00:07:14.032 20870.695 - 20971.520: 99.1100% ( 9) 00:07:14.032 20971.520 - 21072.345: 99.1495% ( 6) 00:07:14.032 21072.345 - 21173.169: 99.2220% ( 11) 00:07:14.032 21173.169 - 21273.994: 99.3209% ( 15) 00:07:14.032 21273.994 - 21374.818: 99.4330% ( 17) 00:07:14.032 21374.818 - 21475.643: 99.5121% ( 12) 00:07:14.032 21475.643 - 21576.468: 99.6044% ( 14) 00:07:14.032 21576.468 - 21677.292: 99.7033% ( 15) 00:07:14.032 21677.292 - 21778.117: 99.7693% ( 10) 00:07:14.032 21778.117 - 21878.942: 99.8352% ( 10) 00:07:14.032 21878.942 - 21979.766: 99.8879% ( 8) 00:07:14.032 21979.766 - 22080.591: 99.9341% ( 7) 00:07:14.032 22080.591 - 22181.415: 99.9736% ( 6) 00:07:14.032 22181.415 - 22282.240: 100.0000% ( 4) 00:07:14.032 00:07:14.032 17:37:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:15.410 Initializing NVMe Controllers 00:07:15.410 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:15.410 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:15.410 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:15.410 Attached to NVMe Controller at 
0000:00:12.0 [1b36:0010]
00:07:15.410 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:15.410 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:15.410 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:15.410 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:15.410 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:15.410 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:15.410 Initialization complete. Launching workers.
00:07:15.410 ========================================================
00:07:15.410 Latency(us)
00:07:15.410 Device Information : IOPS MiB/s Average min max
00:07:15.410 PCIE (0000:00:10.0) NSID 1 from core 0: 14666.56 171.87 8739.69 5447.56 40150.43
00:07:15.410 PCIE (0000:00:11.0) NSID 1 from core 0: 14666.56 171.87 8724.69 5684.00 38484.08
00:07:15.410 PCIE (0000:00:13.0) NSID 1 from core 0: 14666.56 171.87 8709.02 5718.53 38098.38
00:07:15.410 PCIE (0000:00:12.0) NSID 1 from core 0: 14666.56 171.87 8693.81 5854.56 36358.08
00:07:15.410 PCIE (0000:00:12.0) NSID 2 from core 0: 14666.56 171.87 8678.84 5696.90 34805.70
00:07:15.410 PCIE (0000:00:12.0) NSID 3 from core 0: 14730.32 172.62 8626.40 5676.14 26767.08
00:07:15.410 ========================================================
00:07:15.410 Total : 88063.10 1031.99 8695.36 5447.56 40150.43
00:07:15.410
00:07:15.410 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:15.410 =================================================================================
00:07:15.410 1.00000% : 6125.095us
00:07:15.410 10.00000% : 6654.425us
00:07:15.410 25.00000% : 6906.486us
00:07:15.410 50.00000% : 7360.197us
00:07:15.410 75.00000% : 9527.926us
00:07:15.410 90.00000% : 13107.200us
00:07:15.410 95.00000% : 14518.745us
00:07:15.410 98.00000% : 15930.289us
00:07:15.410 99.00000% : 16837.711us
00:07:15.410 99.50000% : 32263.877us
00:07:15.410 99.90000% : 39724.898us
00:07:15.410 99.99000% : 40128.197us
00:07:15.410 99.99900% : 40329.846us
00:07:15.410 99.99990% : 40329.846us
00:07:15.410 99.99999% : 40329.846us
00:07:15.410
00:07:15.410 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:15.410 =================================================================================
00:07:15.410 1.00000% : 6225.920us
00:07:15.410 10.00000% : 6704.837us
00:07:15.410 25.00000% : 6906.486us
00:07:15.410 50.00000% : 7259.372us
00:07:15.410 75.00000% : 9679.163us
00:07:15.410 90.00000% : 12905.551us
00:07:15.410 95.00000% : 14317.095us
00:07:15.410 98.00000% : 15829.465us
00:07:15.410 99.00000% : 16837.711us
00:07:15.410 99.50000% : 30247.385us
00:07:15.410 99.90000% : 38111.705us
00:07:15.410 99.99000% : 38515.003us
00:07:15.410 99.99900% : 38515.003us
00:07:15.410 99.99990% : 38515.003us
00:07:15.410 99.99999% : 38515.003us
00:07:15.410
00:07:15.410 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:15.410 =================================================================================
00:07:15.410 1.00000% : 6200.714us
00:07:15.410 10.00000% : 6704.837us
00:07:15.410 25.00000% : 6906.486us
00:07:15.410 50.00000% : 7309.785us
00:07:15.410 75.00000% : 9527.926us
00:07:15.410 90.00000% : 12653.489us
00:07:15.410 95.00000% : 14417.920us
00:07:15.410 98.00000% : 15627.815us
00:07:15.410 99.00000% : 17241.009us
00:07:15.410 99.50000% : 29239.138us
00:07:15.410 99.90000% : 37708.406us
00:07:15.410 99.99000% : 38111.705us
00:07:15.410 99.99900% : 38111.705us
00:07:15.410 99.99990% : 38111.705us
00:07:15.410 99.99999% : 38111.705us
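[Editor's note] The device table above reports both IOPS and MiB/s for this spdk_nvme_perf run (queue depth 128, write workload, 12288-byte I/O, 1 second, per the command line earlier in the log). The two columns are consistent: MiB/s is simply IOPS times the 12 KiB I/O size. A minimal sketch of that arithmetic, using values copied from the table (the helper name mib_per_s is ours, not SPDK's):

```python
# Editor's sketch, not part of the test output: reproduce the MiB/s column
# of the "Device Information" table from the IOPS column and the -o 12288
# I/O size used by this spdk_nvme_perf invocation.
IO_SIZE_BYTES = 12288        # -o 12288 (12 KiB writes)
MIB = 1024 * 1024

def mib_per_s(iops: float) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * IO_SIZE_BYTES / MIB

print(round(mib_per_s(14666.56), 2))  # 171.87  -> per-namespace rows above
print(round(mib_per_s(88063.10), 2))  # 1031.99 -> Total row above
```

The Total row checks out the same way: the six per-namespace IOPS figures sum to approximately 88063.10, which converts to the reported 1031.99 MiB/s.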
00:07:15.410 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:15.410 =================================================================================
00:07:15.410 1.00000% : 6225.920us
00:07:15.410 10.00000% : 6704.837us
00:07:15.410 25.00000% : 6906.486us
00:07:15.410 50.00000% : 7309.785us
00:07:15.410 75.00000% : 9477.514us
00:07:15.410 90.00000% : 12603.077us
00:07:15.410 95.00000% : 14518.745us
00:07:15.410 98.00000% : 15930.289us
00:07:15.410 99.00000% : 16938.535us
00:07:15.410 99.50000% : 28432.542us
00:07:15.410 99.90000% : 36095.212us
00:07:15.410 99.99000% : 36498.511us
00:07:15.410 99.99900% : 36498.511us
00:07:15.410 99.99990% : 36498.511us
00:07:15.410 99.99999% : 36498.511us
00:07:15.410
00:07:15.410 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:15.410 =================================================================================
00:07:15.410 1.00000% : 6200.714us
00:07:15.410 10.00000% : 6704.837us
00:07:15.410 25.00000% : 6906.486us
00:07:15.410 50.00000% : 7259.372us
00:07:15.410 75.00000% : 9427.102us
00:07:15.410 90.00000% : 12653.489us
00:07:15.410 95.00000% : 14518.745us
00:07:15.410 98.00000% : 15930.289us
00:07:15.410 99.00000% : 17341.834us
00:07:15.410 99.50000% : 26617.698us
00:07:15.410 99.90000% : 34482.018us
00:07:15.410 99.99000% : 34885.317us
00:07:15.410 99.99900% : 34885.317us
00:07:15.411 99.99990% : 34885.317us
00:07:15.411 99.99999% : 34885.317us
00:07:15.411
00:07:15.411 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:15.411 =================================================================================
00:07:15.411 1.00000% : 6175.508us
00:07:15.411 10.00000% : 6704.837us
00:07:15.411 25.00000% : 6906.486us
00:07:15.411 50.00000% : 7309.785us
00:07:15.411 75.00000% : 9578.338us
00:07:15.411 90.00000% : 12905.551us
00:07:15.411 95.00000% : 14619.569us
00:07:15.411 98.00000% : 15728.640us
00:07:15.411 99.00000% : 16837.711us
00:07:15.411 99.50000% : 18854.203us
00:07:15.411 99.90000% : 26416.049us
00:07:15.411 99.99000% : 26819.348us
00:07:15.411 99.99900% : 26819.348us
00:07:15.411 99.99990% : 26819.348us
00:07:15.411 99.99999% : 26819.348us
00:07:15.411
00:07:15.411 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:15.411 ==============================================================================
00:07:15.411 Range in us Cumulative IO count
00:07:15.411 5444.529 - 5469.735: 0.0136% ( 2)
00:07:15.411 5469.735 - 5494.942: 0.0204% ( 1)
00:07:15.411 5494.942 - 5520.148: 0.0272% ( 1)
00:07:15.411 5520.148 - 5545.354: 0.0408% ( 2)
00:07:15.411 5545.354 - 5570.560: 0.0476% ( 1)
00:07:15.411 5570.560 - 5595.766: 0.0543% ( 1)
00:07:15.411 5595.766 - 5620.972: 0.0679% ( 2)
00:07:15.411 5620.972 - 5646.178: 0.0747% ( 1)
00:07:15.411 5646.178 - 5671.385: 0.0815% ( 1)
00:07:15.411 5671.385 - 5696.591: 0.0951% ( 2)
00:07:15.411 5696.591 - 5721.797: 0.1019% ( 1)
00:07:15.411 5721.797 - 5747.003: 0.1087% ( 1)
00:07:15.411 5747.003 - 5772.209: 0.1223% ( 2)
00:07:15.411 5772.209 - 5797.415: 0.1291% ( 1)
00:07:15.411 5797.415 - 5822.622: 0.1495% ( 3)
00:07:15.411 5822.622 - 5847.828: 0.1834% ( 5)
00:07:15.411 5847.828 - 5873.034: 0.2106% ( 4)
00:07:15.411 5873.034 - 5898.240: 0.2853% ( 11)
00:07:15.411 5898.240 - 5923.446: 0.3601% ( 11)
00:07:15.411 5923.446 - 5948.652: 0.4416% ( 12)
00:07:15.411 5948.652 - 5973.858: 0.4891% ( 7)
00:07:15.411 5973.858 - 5999.065: 0.5503% ( 9)
00:07:15.411 5999.065 - 6024.271: 0.6522% ( 15)
00:07:15.411 6024.271 - 6049.477: 0.7405% ( 13)
00:07:15.411 6049.477 - 6074.683: 0.8696% ( 19) 00:07:15.411 6074.683 - 6099.889: 0.9443% ( 11) 00:07:15.411 6099.889 - 6125.095: 1.0394% ( 14) 00:07:15.411 6125.095 - 6150.302: 1.1617% ( 18) 00:07:15.411 6150.302 - 6175.508: 1.2364% ( 11) 00:07:15.411 6175.508 - 6200.714: 1.3383% ( 15) 00:07:15.411 6200.714 - 6225.920: 1.4946% ( 23) 00:07:15.411 6225.920 - 6251.126: 1.6372% ( 21) 00:07:15.411 6251.126 - 6276.332: 1.8207% ( 27) 00:07:15.411 6276.332 - 6301.538: 2.0448% ( 33) 00:07:15.411 6301.538 - 6326.745: 2.3438% ( 44) 00:07:15.411 6326.745 - 6351.951: 2.7582% ( 61) 00:07:15.411 6351.951 - 6377.157: 3.2541% ( 73) 00:07:15.411 6377.157 - 6402.363: 3.7568% ( 74) 00:07:15.411 6402.363 - 6427.569: 4.2663% ( 75) 00:07:15.411 6427.569 - 6452.775: 4.8845% ( 91) 00:07:15.411 6452.775 - 6503.188: 6.3995% ( 223) 00:07:15.411 6503.188 - 6553.600: 7.7242% ( 195) 00:07:15.411 6553.600 - 6604.012: 9.4497% ( 254) 00:07:15.411 6604.012 - 6654.425: 11.8139% ( 348) 00:07:15.411 6654.425 - 6704.837: 14.3546% ( 374) 00:07:15.411 6704.837 - 6755.249: 17.3777% ( 445) 00:07:15.411 6755.249 - 6805.662: 20.7201% ( 492) 00:07:15.411 6805.662 - 6856.074: 24.4429% ( 548) 00:07:15.411 6856.074 - 6906.486: 27.9280% ( 513) 00:07:15.411 6906.486 - 6956.898: 31.4198% ( 514) 00:07:15.411 6956.898 - 7007.311: 34.8981% ( 512) 00:07:15.411 7007.311 - 7057.723: 38.3152% ( 503) 00:07:15.411 7057.723 - 7108.135: 40.8356% ( 371) 00:07:15.411 7108.135 - 7158.548: 43.4375% ( 383) 00:07:15.411 7158.548 - 7208.960: 45.9783% ( 374) 00:07:15.411 7208.960 - 7259.372: 48.0163% ( 300) 00:07:15.411 7259.372 - 7309.785: 49.8166% ( 265) 00:07:15.411 7309.785 - 7360.197: 51.7052% ( 278) 00:07:15.411 7360.197 - 7410.609: 53.4103% ( 251) 00:07:15.411 7410.609 - 7461.022: 54.9253% ( 223) 00:07:15.411 7461.022 - 7511.434: 56.2568% ( 196) 00:07:15.411 7511.434 - 7561.846: 57.4185% ( 171) 00:07:15.411 7561.846 - 7612.258: 58.4443% ( 151) 00:07:15.411 7612.258 - 7662.671: 59.2120% ( 113) 00:07:15.411 7662.671 - 7713.083: 59.9728% ( 112) 00:07:15.411 7713.083 - 7763.495: 60.6182% ( 95) 00:07:15.411 7763.495 - 7813.908: 61.1481% ( 78) 00:07:15.411 7813.908 - 7864.320: 61.6848% ( 79) 00:07:15.411 7864.320 - 7914.732: 62.2079% ( 77) 00:07:15.411 7914.732 - 7965.145: 62.7310% ( 77) 00:07:15.411 7965.145 - 8015.557: 63.2201% ( 72) 00:07:15.411 8015.557 - 8065.969: 63.7908% ( 84) 00:07:15.411 8065.969 - 8116.382: 64.1780% ( 57) 00:07:15.411 8116.382 - 8166.794: 64.4701% ( 43) 00:07:15.411 8166.794 - 8217.206: 64.9524% ( 71) 00:07:15.411 8217.206 - 8267.618: 65.6046% ( 96) 00:07:15.411 8267.618 - 8318.031: 66.0326% ( 63) 00:07:15.411 8318.031 - 8368.443: 66.4742% ( 65) 00:07:15.411 8368.443 - 8418.855: 66.8546% ( 56) 00:07:15.411 8418.855 - 8469.268: 67.2147% ( 53) 00:07:15.411 8469.268 - 8519.680: 67.4728% ( 38) 00:07:15.411 8519.680 - 8570.092: 67.8601% ( 57) 00:07:15.411 8570.092 - 8620.505: 68.3492% ( 72) 00:07:15.411 8620.505 - 8670.917: 68.7432% ( 58) 00:07:15.411 8670.917 - 8721.329: 69.2188% ( 70) 00:07:15.411 8721.329 - 8771.742: 69.5584% ( 50) 00:07:15.411 8771.742 - 8822.154: 69.7758% ( 32) 00:07:15.411 8822.154 - 8872.566: 70.0068% ( 34) 00:07:15.411 8872.566 - 8922.978: 70.4755% ( 69) 00:07:15.411 8922.978 - 8973.391: 70.9035% ( 63) 00:07:15.411 8973.391 - 9023.803: 71.1685% ( 39) 00:07:15.411 9023.803 - 9074.215: 71.5149% ( 51) 00:07:15.411 9074.215 - 9124.628: 71.8478% ( 49) 00:07:15.411 9124.628 - 9175.040: 72.4321% ( 86) 00:07:15.411 9175.040 - 9225.452: 72.8329% ( 59) 00:07:15.411 9225.452 - 9275.865: 73.1250% ( 43) 00:07:15.411 
9275.865 - 9326.277: 73.5530% ( 63) 00:07:15.411 9326.277 - 9376.689: 73.9402% ( 57) 00:07:15.411 9376.689 - 9427.102: 74.3478% ( 60) 00:07:15.411 9427.102 - 9477.514: 74.7079% ( 53) 00:07:15.411 9477.514 - 9527.926: 75.0204% ( 46) 00:07:15.411 9527.926 - 9578.338: 75.2649% ( 36) 00:07:15.411 9578.338 - 9628.751: 75.4755% ( 31) 00:07:15.411 9628.751 - 9679.163: 75.7065% ( 34) 00:07:15.411 9679.163 - 9729.575: 75.9307% ( 33) 00:07:15.411 9729.575 - 9779.988: 76.2772% ( 51) 00:07:15.411 9779.988 - 9830.400: 76.6780% ( 59) 00:07:15.411 9830.400 - 9880.812: 76.9701% ( 43) 00:07:15.411 9880.812 - 9931.225: 77.3709% ( 59) 00:07:15.411 9931.225 - 9981.637: 77.6834% ( 46) 00:07:15.411 9981.637 - 10032.049: 77.9755% ( 43) 00:07:15.411 10032.049 - 10082.462: 78.3084% ( 49) 00:07:15.411 10082.462 - 10132.874: 78.5938% ( 42) 00:07:15.411 10132.874 - 10183.286: 78.9198% ( 48) 00:07:15.411 10183.286 - 10233.698: 79.2527% ( 49) 00:07:15.411 10233.698 - 10284.111: 79.5992% ( 51) 00:07:15.411 10284.111 - 10334.523: 79.9728% ( 55) 00:07:15.411 10334.523 - 10384.935: 80.2582% ( 42) 00:07:15.411 10384.935 - 10435.348: 80.5707% ( 46) 00:07:15.411 10435.348 - 10485.760: 80.8356% ( 39) 00:07:15.411 10485.760 - 10536.172: 81.0598% ( 33) 00:07:15.411 10536.172 - 10586.585: 81.3315% ( 40) 00:07:15.411 10586.585 - 10636.997: 81.6304% ( 44) 00:07:15.411 10636.997 - 10687.409: 81.9226% ( 43) 00:07:15.411 10687.409 - 10737.822: 82.1467% ( 33) 00:07:15.411 10737.822 - 10788.234: 82.3981% ( 37) 00:07:15.411 10788.234 - 10838.646: 82.6223% ( 33) 00:07:15.411 10838.646 - 10889.058: 82.8465% ( 33) 00:07:15.411 10889.058 - 10939.471: 83.0503% ( 30) 00:07:15.411 10939.471 - 10989.883: 83.2269% ( 26) 00:07:15.411 10989.883 - 11040.295: 83.3764% ( 22) 00:07:15.411 11040.295 - 11090.708: 83.4986% ( 18) 00:07:15.411 11090.708 - 11141.120: 83.6005% ( 15) 00:07:15.411 11141.120 - 11191.532: 83.7160% ( 17) 00:07:15.411 11191.532 - 11241.945: 83.7976% ( 12) 00:07:15.411 11241.945 - 11292.357: 83.9062% ( 16) 00:07:15.411 11292.357 - 11342.769: 84.1033% ( 29) 00:07:15.411 11342.769 - 11393.182: 84.2799% ( 26) 00:07:15.411 11393.182 - 11443.594: 84.4497% ( 25) 00:07:15.411 11443.594 - 11494.006: 84.6196% ( 25) 00:07:15.411 11494.006 - 11544.418: 84.7351% ( 17) 00:07:15.411 11544.418 - 11594.831: 84.8438% ( 16) 00:07:15.411 11594.831 - 11645.243: 84.9932% ( 22) 00:07:15.411 11645.243 - 11695.655: 85.1902% ( 29) 00:07:15.411 11695.655 - 11746.068: 85.2853% ( 14) 00:07:15.411 11746.068 - 11796.480: 85.3668% ( 12) 00:07:15.411 11796.480 - 11846.892: 85.4416% ( 11) 00:07:15.411 11846.892 - 11897.305: 85.6250% ( 27) 00:07:15.411 11897.305 - 11947.717: 85.8084% ( 27) 00:07:15.411 11947.717 - 11998.129: 85.9918% ( 27) 00:07:15.411 11998.129 - 12048.542: 86.2568% ( 39) 00:07:15.411 12048.542 - 12098.954: 86.4946% ( 35) 00:07:15.411 12098.954 - 12149.366: 86.7052% ( 31) 00:07:15.411 12149.366 - 12199.778: 86.8410% ( 20) 00:07:15.411 12199.778 - 12250.191: 86.9769% ( 20) 00:07:15.411 12250.191 - 12300.603: 87.1739% ( 29) 00:07:15.411 12300.603 - 12351.015: 87.3166% ( 21) 00:07:15.411 12351.015 - 12401.428: 87.5136% ( 29) 00:07:15.411 12401.428 - 12451.840: 87.6834% ( 25) 00:07:15.411 12451.840 - 12502.252: 87.9076% ( 33) 00:07:15.411 12502.252 - 12552.665: 88.0435% ( 20) 00:07:15.411 12552.665 - 12603.077: 88.1114% ( 10) 00:07:15.411 12603.077 - 12653.489: 88.2541% ( 21) 00:07:15.411 12653.489 - 12703.902: 88.4171% ( 24) 00:07:15.411 12703.902 - 12754.314: 88.5734% ( 23) 00:07:15.411 12754.314 - 12804.726: 88.7432% ( 25) 00:07:15.411 12804.726 - 
00:07:15.411 [tail of the preceding controller's latency histogram: buckets from 12855.138 us (cumulative 88.9470%) up to the terminal bucket 40128.197 - 40329.846 us at 100.0000%; intermediate rows elided]
00:07:15.412 
00:07:15.412 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:15.412 ==============================================================================
00:07:15.412        Range in us     Cumulative    IO count
00:07:15.412   5671.385 -  5696.591:    0.0068% (        1)
[intermediate buckets elided]
00:07:15.413  38313.354 - 38515.003:  100.0000% (        7)
00:07:15.413 
00:07:15.413 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:15.413 ==============================================================================
00:07:15.413        Range in us     Cumulative    IO count
00:07:15.413   5696.591 -  5721.797:    0.0068% (        1)
[intermediate buckets elided]
00:07:15.414  37910.055 - 38111.705:  100.0000% (        7)
00:07:15.414 
00:07:15.414 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:15.414 ==============================================================================
00:07:15.414        Range in us     Cumulative    IO count
00:07:15.414   5847.828 -  5873.034:    0.0136% (        2)
[intermediate buckets elided]
00:07:15.416  36296.862 - 36498.511:  100.0000% (        3)
00:07:15.416 
00:07:15.416 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:15.416 ==============================================================================
00:07:15.416        Range in us     Cumulative    IO count
00:07:15.416   5696.591 -  5721.797:    0.0068% (        1)
[intermediate buckets elided]
00:07:15.417  34683.668 - 34885.317:  100.0000% (        5)
00:07:15.417 
00:07:15.417 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:15.417 ==============================================================================
00:07:15.417        Range in us     Cumulative    IO count
00:07:15.417   5671.385 -  5696.591:    0.0068% (        1)
[intermediate buckets elided]
00:07:15.418  26617.698 - 26819.348:  100.0000% (        6)
00:07:15.418 
00:07:15.418 17:37:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:15.418 
00:07:15.418 real    0m2.528s
00:07:15.418 user    0m2.207s
00:07:15.418 sys     0m0.214s
00:07:15.418 17:37:04 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.418 17:37:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:15.418 ************************************
00:07:15.418 END TEST nvme_perf
00:07:15.419 ************************************
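The histograms above all share one layout: each row is one latency bucket, "low - high:" in microseconds, followed by the cumulative percentage of I/Os completed at or below the bucket's upper bound and the raw I/O count that fell into the bucket. A quick way to pull a tail percentile out of a saved copy of this console output — a sketch only; the file name nvme_perf.log and the exact field positions are assumptions based on the rows shown above:

    # Print the first bucket whose cumulative percentage reaches 99%
    # (i.e. the p99 latency bucket of the first histogram in the file).
    awk -v target=99.0 '
      $3 == "-" && $5 ~ /%$/ {          # bucket rows: <ts> <low> - <high>: <cum>% ( <count>)
        pct = $5; sub(/%/, "", pct)
        if (pct + 0 >= target) {
          high = $4; sub(/:$/, "", high)
          print "p99 bucket:", $2, "-", high, "us (cumulative", $5 ")"
          exit                          # remove to scan every histogram in the log
        }
      }' nvme_perf.log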
00:07:15.419 17:37:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:15.419 17:37:04 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:15.419 17:37:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:15.419 17:37:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:15.419 ************************************
00:07:15.419 START TEST nvme_hello_world
00:07:15.419 ************************************
00:07:15.419 17:37:04 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:15.419 Initializing NVMe Controllers
00:07:15.419 Attached to 0000:00:10.0
00:07:15.419 Namespace ID: 1 size: 6GB
00:07:15.419 Attached to 0000:00:11.0
00:07:15.419 Namespace ID: 1 size: 5GB
00:07:15.419 Attached to 0000:00:13.0
00:07:15.419 Namespace ID: 1 size: 1GB
00:07:15.419 Attached to 0000:00:12.0
00:07:15.419 Namespace ID: 1 size: 4GB
00:07:15.419 Namespace ID: 2 size: 4GB
00:07:15.419 Namespace ID: 3 size: 4GB
00:07:15.419 Initialization complete.
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 INFO: using host memory buffer for IO
00:07:15.419 Hello world!
00:07:15.419 
00:07:15.419 real    0m0.212s
00:07:15.419 user    0m0.077s
00:07:15.419 sys     0m0.092s
00:07:15.419 17:37:05 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.419 17:37:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:15.419 ************************************
00:07:15.419 END TEST nvme_hello_world
00:07:15.419 ************************************
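Every test in this run is launched the same way: a run_test invocation (visible above at nvme/nvme.sh@87, and again below at @88, @89, @90) prints the starred START/END banners and times the test binary, which is where the real/user/sys triples come from. A minimal stand-in that reproduces the observable pattern — a sketch, not the actual run_test defined in common/autotest_common.sh:

    # Hypothetical simplification of the harness wrapper.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # bash's time keyword emits the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

    run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0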
Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:15.677 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:15.678 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:15.678 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:15.678 NVMe Readv/Writev Request test 00:07:15.678 Attached to 0000:00:10.0 00:07:15.678 Attached to 0000:00:11.0 00:07:15.678 Attached to 0000:00:13.0 00:07:15.678 Attached to 0000:00:12.0 00:07:15.678 0000:00:10.0: build_io_request_2 test passed 00:07:15.678 0000:00:10.0: build_io_request_4 test passed 00:07:15.678 0000:00:10.0: build_io_request_5 test passed 00:07:15.678 0000:00:10.0: build_io_request_6 test passed 00:07:15.678 0000:00:10.0: build_io_request_7 test passed 00:07:15.678 0000:00:10.0: build_io_request_10 test passed 00:07:15.678 0000:00:11.0: build_io_request_2 test passed 00:07:15.678 0000:00:11.0: build_io_request_4 test passed 00:07:15.678 0000:00:11.0: build_io_request_5 test passed 00:07:15.678 0000:00:11.0: build_io_request_6 test passed 00:07:15.678 0000:00:11.0: build_io_request_7 test passed 00:07:15.678 0000:00:11.0: build_io_request_10 test passed 00:07:15.678 Cleaning up... 
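For context on the pass/fail pattern above: each build_io_request_N case feeds the driver a different scatter-gather layout, and the "Invalid IO length parameter" rejections are the expected outcome for layouts whose total SGE bytes do not match the request's block count. Below is a minimal sketch of the SGL callback pair such a request rides on, assuming an already-attached namespace and qpair; the my_* names are illustrative, not taken from the test source.

    #include "spdk/nvme.h"

    struct my_sgl_ctx {
        void     *buf;      /* one contiguous DMA-able segment, for simplicity */
        uint32_t  len;
        uint32_t  offset;   /* cursor the driver rewinds via the reset callback */
    };

    static void my_reset_sgl(void *cb_arg, uint32_t sgl_offset)
    {
        ((struct my_sgl_ctx *)cb_arg)->offset = sgl_offset;
    }

    static int my_next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct my_sgl_ctx *ctx = cb_arg;

        *address = (uint8_t *)ctx->buf + ctx->offset;
        *length  = ctx->len - ctx->offset;   /* hand back the remainder as one SGE */
        ctx->offset = ctx->len;
        return 0;
    }

    static void my_io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* spdk_nvme_cpl_is_error(cpl) would flag a failed request here */
    }

    /* A well-formed request: total SGE bytes == lba_count * sector size. */
    static int my_queue_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                              struct my_sgl_ctx *ctx)
    {
        uint32_t lba_count = ctx->len / spdk_nvme_ns_get_sector_size(ns);

        return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, lba_count,
                                       my_io_done, ctx, 0 /* io_flags */,
                                       my_reset_sgl, my_next_sge);
    }

The invalid cases logged above are built the same way, just with SGE totals that deliberately disagree with lba_count, so the driver rejects them before submission.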
00:07:15.678 00:07:15.678 real 0m0.275s 00:07:15.678 user 0m0.133s 00:07:15.678 sys 0m0.100s 00:07:15.678 17:37:05 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.678 17:37:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:15.678 ************************************ 00:07:15.678 END TEST nvme_sgl 00:07:15.678 ************************************ 00:07:15.936 17:37:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.936 ************************************ 00:07:15.936 START TEST nvme_e2edp 00:07:15.936 ************************************ 00:07:15.936 17:37:05 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:15.936 NVMe Write/Read with End-to-End data protection test 00:07:15.936 Attached to 0000:00:10.0 00:07:15.936 Attached to 0000:00:11.0 00:07:15.936 Attached to 0000:00:13.0 00:07:15.936 Attached to 0000:00:12.0 00:07:15.936 Cleaning up... 00:07:15.936 00:07:15.936 real 0m0.208s 00:07:15.936 user 0m0.071s 00:07:15.936 sys 0m0.090s 00:07:15.936 17:37:05 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.936 17:37:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:15.936 ************************************ 00:07:15.936 END TEST nvme_e2edp 00:07:15.936 ************************************ 00:07:15.936 17:37:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.936 17:37:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.936 ************************************ 00:07:15.936 START TEST nvme_reserve 00:07:15.936 ************************************ 00:07:15.936 17:37:05 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:16.194 ===================================================== 00:07:16.194 NVMe Controller at PCI bus 0, device 16, function 0 00:07:16.194 ===================================================== 00:07:16.194 Reservations: Not Supported 00:07:16.194 ===================================================== 00:07:16.194 NVMe Controller at PCI bus 0, device 17, function 0 00:07:16.194 ===================================================== 00:07:16.194 Reservations: Not Supported 00:07:16.194 ===================================================== 00:07:16.194 NVMe Controller at PCI bus 0, device 19, function 0 00:07:16.194 ===================================================== 00:07:16.194 Reservations: Not Supported 00:07:16.194 ===================================================== 00:07:16.194 NVMe Controller at PCI bus 0, device 18, function 0 00:07:16.194 ===================================================== 00:07:16.194 Reservations: Not Supported 00:07:16.194 Reservation test passed 00:07:16.194 00:07:16.194 real 0m0.200s 00:07:16.194 user 0m0.072s 00:07:16.194 sys 0m0.083s 00:07:16.194 17:37:05 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.194 17:37:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- 
# set +x 00:07:16.194 ************************************ 00:07:16.194 END TEST nvme_reserve 00:07:16.194 ************************************ 00:07:16.194 17:37:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:16.194 17:37:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.194 17:37:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.194 17:37:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.194 ************************************ 00:07:16.194 START TEST nvme_err_injection 00:07:16.194 ************************************ 00:07:16.194 17:37:05 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:16.452 NVMe Error Injection test 00:07:16.452 Attached to 0000:00:10.0 00:07:16.452 Attached to 0000:00:11.0 00:07:16.452 Attached to 0000:00:13.0 00:07:16.452 Attached to 0000:00:12.0 00:07:16.452 0000:00:11.0: get features failed as expected 00:07:16.452 0000:00:13.0: get features failed as expected 00:07:16.452 0000:00:12.0: get features failed as expected 00:07:16.452 0000:00:10.0: get features failed as expected 00:07:16.452 0000:00:10.0: get features successfully as expected 00:07:16.452 0000:00:11.0: get features successfully as expected 00:07:16.452 0000:00:13.0: get features successfully as expected 00:07:16.452 0000:00:12.0: get features successfully as expected 00:07:16.452 0000:00:10.0: read failed as expected 00:07:16.452 0000:00:11.0: read failed as expected 00:07:16.452 0000:00:13.0: read failed as expected 00:07:16.452 0000:00:12.0: read failed as expected 00:07:16.452 0000:00:10.0: read successfully as expected 00:07:16.452 0000:00:11.0: read successfully as expected 00:07:16.452 0000:00:13.0: read successfully as expected 00:07:16.452 0000:00:12.0: read successfully as expected 00:07:16.452 Cleaning up... 00:07:16.452 00:07:16.452 real 0m0.221s 00:07:16.452 user 0m0.076s 00:07:16.452 sys 0m0.096s 00:07:16.452 17:37:06 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.452 ************************************ 00:07:16.452 END TEST nvme_err_injection 00:07:16.452 ************************************ 00:07:16.452 17:37:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:16.452 17:37:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:16.452 17:37:06 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:16.452 17:37:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.452 17:37:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.452 ************************************ 00:07:16.452 START TEST nvme_overhead 00:07:16.452 ************************************ 00:07:16.452 17:37:06 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:17.834 Initializing NVMe Controllers 00:07:17.834 Attached to 0000:00:10.0 00:07:17.834 Attached to 0000:00:11.0 00:07:17.834 Attached to 0000:00:13.0 00:07:17.834 Attached to 0000:00:12.0 00:07:17.834 Initialization complete. Launching workers. 
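The submit/complete averages and histograms that follow are per-IO software overheads, not device latencies. A rough sketch of how such numbers can be taken with SPDK's tick counter is below, assuming a namespace and qpair are already set up; this is a simplification, not the overhead tool's actual code (the real tool isolates CPU time spent inside the completion path, whereas t2 - t1 here also includes device time).

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static uint64_t g_tsc_hz;   /* cache spdk_get_ticks_hz() once at startup */

    static void my_time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                               void *buf, uint64_t *submit_ns, uint64_t *complete_ns)
    {
        uint64_t t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* lba */, 1, NULL, NULL, 0);
        uint64_t t1 = spdk_get_ticks();            /* submit-side software cost */

        while (spdk_nvme_qpair_process_completions(qp, 0) == 0) {
            ;                                      /* poll until the read lands */
        }
        uint64_t t2 = spdk_get_ticks();

        *submit_ns   = (t1 - t0) * 1000000000ULL / g_tsc_hz;
        *complete_ns = (t2 - t1) * 1000000000ULL / g_tsc_hz;
    }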
00:07:17.834 submit (in ns) avg, min, max = 12060.6, 11005.4, 127670.8 00:07:17.834 complete (in ns) avg, min, max = 8177.8, 7290.0, 468953.8 00:07:17.834 00:07:17.834 Submit histogram 00:07:17.834 ================ 00:07:17.834 Range in us Cumulative Count 00:07:17.834 10.978 - 11.028: 0.0076% ( 1) 00:07:17.834 11.028 - 11.077: 0.2047% ( 26) 00:07:17.834 11.077 - 11.126: 1.0160% ( 107) 00:07:17.834 11.126 - 11.175: 4.1171% ( 409) 00:07:17.834 11.175 - 11.225: 9.1288% ( 661) 00:07:17.834 11.225 - 11.274: 15.6873% ( 865) 00:07:17.834 11.274 - 11.323: 23.2770% ( 1001) 00:07:17.834 11.323 - 11.372: 30.4193% ( 942) 00:07:17.834 11.372 - 11.422: 36.2954% ( 775) 00:07:17.834 11.422 - 11.471: 40.8371% ( 599) 00:07:17.834 11.471 - 11.520: 43.7258% ( 381) 00:07:17.835 11.520 - 11.569: 45.7957% ( 273) 00:07:17.835 11.569 - 11.618: 47.7064% ( 252) 00:07:17.835 11.618 - 11.668: 50.3071% ( 343) 00:07:17.835 11.668 - 11.717: 53.7721% ( 457) 00:07:17.835 11.717 - 11.766: 57.9271% ( 548) 00:07:17.835 11.766 - 11.815: 62.3095% ( 578) 00:07:17.835 11.815 - 11.865: 66.7905% ( 591) 00:07:17.835 11.865 - 11.914: 71.5141% ( 623) 00:07:17.835 11.914 - 11.963: 75.4113% ( 514) 00:07:17.835 11.963 - 12.012: 78.4138% ( 396) 00:07:17.835 12.012 - 12.062: 80.8856% ( 326) 00:07:17.835 12.062 - 12.111: 82.5082% ( 214) 00:07:17.835 12.111 - 12.160: 83.6379% ( 149) 00:07:17.835 12.160 - 12.209: 84.6008% ( 127) 00:07:17.835 12.209 - 12.258: 85.4348% ( 110) 00:07:17.835 12.258 - 12.308: 86.0945% ( 87) 00:07:17.835 12.308 - 12.357: 86.7314% ( 84) 00:07:17.835 12.357 - 12.406: 87.1939% ( 61) 00:07:17.835 12.406 - 12.455: 87.6943% ( 66) 00:07:17.835 12.455 - 12.505: 88.1416% ( 59) 00:07:17.835 12.505 - 12.554: 88.7103% ( 75) 00:07:17.835 12.554 - 12.603: 89.3472% ( 84) 00:07:17.835 12.603 - 12.702: 90.7120% ( 180) 00:07:17.835 12.702 - 12.800: 92.1980% ( 196) 00:07:17.835 12.800 - 12.898: 93.3808% ( 156) 00:07:17.835 12.898 - 12.997: 94.1770% ( 105) 00:07:17.835 12.997 - 13.095: 94.7532% ( 76) 00:07:17.835 13.095 - 13.194: 95.1475% ( 52) 00:07:17.835 13.194 - 13.292: 95.2915% ( 19) 00:07:17.835 13.292 - 13.391: 95.4356% ( 19) 00:07:17.835 13.391 - 13.489: 95.5114% ( 10) 00:07:17.835 13.489 - 13.588: 95.6024% ( 12) 00:07:17.835 13.588 - 13.686: 95.6782% ( 10) 00:07:17.835 13.686 - 13.785: 95.7389% ( 8) 00:07:17.835 13.785 - 13.883: 95.7919% ( 7) 00:07:17.835 13.883 - 13.982: 95.8981% ( 14) 00:07:17.835 13.982 - 14.080: 96.0118% ( 15) 00:07:17.835 14.080 - 14.178: 96.1938% ( 24) 00:07:17.835 14.178 - 14.277: 96.3303% ( 18) 00:07:17.835 14.277 - 14.375: 96.4819% ( 20) 00:07:17.835 14.375 - 14.474: 96.5426% ( 8) 00:07:17.835 14.474 - 14.572: 96.5805% ( 5) 00:07:17.835 14.572 - 14.671: 96.6942% ( 15) 00:07:17.835 14.671 - 14.769: 96.7852% ( 12) 00:07:17.835 14.769 - 14.868: 96.8231% ( 5) 00:07:17.835 14.868 - 14.966: 96.8610% ( 5) 00:07:17.835 14.966 - 15.065: 96.9368% ( 10) 00:07:17.835 15.065 - 15.163: 97.0127% ( 10) 00:07:17.835 15.163 - 15.262: 97.0506% ( 5) 00:07:17.835 15.262 - 15.360: 97.0657% ( 2) 00:07:17.835 15.360 - 15.458: 97.1036% ( 5) 00:07:17.835 15.458 - 15.557: 97.1416% ( 5) 00:07:17.835 15.557 - 15.655: 97.1795% ( 5) 00:07:17.835 15.655 - 15.754: 97.2098% ( 4) 00:07:17.835 15.754 - 15.852: 97.2325% ( 3) 00:07:17.835 15.852 - 15.951: 97.2780% ( 6) 00:07:17.835 15.951 - 16.049: 97.2856% ( 1) 00:07:17.835 16.049 - 16.148: 97.3235% ( 5) 00:07:17.835 16.148 - 16.246: 97.3690% ( 6) 00:07:17.835 16.246 - 16.345: 97.4069% ( 5) 00:07:17.835 16.345 - 16.443: 97.4221% ( 2) 00:07:17.835 16.443 - 16.542: 97.4448% ( 3) 
00:07:17.835 16.542 - 16.640: 97.5055% ( 8) 00:07:17.835 16.640 - 16.738: 97.5282% ( 3) 00:07:17.835 16.738 - 16.837: 97.5510% ( 3) 00:07:17.835 17.034 - 17.132: 97.5662% ( 2) 00:07:17.835 17.132 - 17.231: 97.5813% ( 2) 00:07:17.835 17.231 - 17.329: 97.6344% ( 7) 00:07:17.835 17.329 - 17.428: 97.6647% ( 4) 00:07:17.835 17.428 - 17.526: 97.7481% ( 11) 00:07:17.835 17.526 - 17.625: 97.8543% ( 14) 00:07:17.835 17.625 - 17.723: 97.9983% ( 19) 00:07:17.835 17.723 - 17.822: 98.0666% ( 9) 00:07:17.835 17.822 - 17.920: 98.1576% ( 12) 00:07:17.835 17.920 - 18.018: 98.2182% ( 8) 00:07:17.835 18.018 - 18.117: 98.3092% ( 12) 00:07:17.835 18.117 - 18.215: 98.4002% ( 12) 00:07:17.835 18.215 - 18.314: 98.4381% ( 5) 00:07:17.835 18.314 - 18.412: 98.4457% ( 1) 00:07:17.835 18.412 - 18.511: 98.4987% ( 7) 00:07:17.835 18.511 - 18.609: 98.5215% ( 3) 00:07:17.835 18.609 - 18.708: 98.5594% ( 5) 00:07:17.835 18.708 - 18.806: 98.5973% ( 5) 00:07:17.835 18.806 - 18.905: 98.6807% ( 11) 00:07:17.835 18.905 - 19.003: 98.7186% ( 5) 00:07:17.835 19.003 - 19.102: 98.7490% ( 4) 00:07:17.835 19.102 - 19.200: 98.7869% ( 5) 00:07:17.835 19.200 - 19.298: 98.8020% ( 2) 00:07:17.835 19.298 - 19.397: 98.8172% ( 2) 00:07:17.835 19.397 - 19.495: 98.8399% ( 3) 00:07:17.835 19.495 - 19.594: 98.8551% ( 2) 00:07:17.835 19.791 - 19.889: 98.8703% ( 2) 00:07:17.835 19.988 - 20.086: 98.8930% ( 3) 00:07:17.835 20.086 - 20.185: 98.9082% ( 2) 00:07:17.835 20.283 - 20.382: 98.9233% ( 2) 00:07:17.835 20.578 - 20.677: 98.9385% ( 2) 00:07:17.835 20.677 - 20.775: 98.9537% ( 2) 00:07:17.835 20.775 - 20.874: 98.9613% ( 1) 00:07:17.835 20.972 - 21.071: 98.9916% ( 4) 00:07:17.835 21.268 - 21.366: 98.9992% ( 1) 00:07:17.835 21.465 - 21.563: 99.0067% ( 1) 00:07:17.835 21.760 - 21.858: 99.0143% ( 1) 00:07:17.835 21.858 - 21.957: 99.0219% ( 1) 00:07:17.835 21.957 - 22.055: 99.0295% ( 1) 00:07:17.835 22.055 - 22.154: 99.0447% ( 2) 00:07:17.835 22.154 - 22.252: 99.0598% ( 2) 00:07:17.835 22.843 - 22.942: 99.0674% ( 1) 00:07:17.835 23.138 - 23.237: 99.0750% ( 1) 00:07:17.835 23.237 - 23.335: 99.0826% ( 1) 00:07:17.835 24.123 - 24.222: 99.0902% ( 1) 00:07:17.835 25.206 - 25.403: 99.0977% ( 1) 00:07:17.835 25.994 - 26.191: 99.1053% ( 1) 00:07:17.835 26.191 - 26.388: 99.1205% ( 2) 00:07:17.835 27.963 - 28.160: 99.1281% ( 1) 00:07:17.835 29.932 - 30.129: 99.2342% ( 14) 00:07:17.835 30.129 - 30.326: 99.4996% ( 35) 00:07:17.835 30.326 - 30.523: 99.6588% ( 21) 00:07:17.835 30.523 - 30.720: 99.7422% ( 11) 00:07:17.835 30.720 - 30.917: 99.7725% ( 4) 00:07:17.835 30.917 - 31.114: 99.7953% ( 3) 00:07:17.835 31.114 - 31.311: 99.8029% ( 1) 00:07:17.835 31.311 - 31.508: 99.8104% ( 1) 00:07:17.835 31.508 - 31.705: 99.8256% ( 2) 00:07:17.835 31.705 - 31.902: 99.8408% ( 2) 00:07:17.835 32.295 - 32.492: 99.8484% ( 1) 00:07:17.835 32.492 - 32.689: 99.8559% ( 1) 00:07:17.835 32.886 - 33.083: 99.8711% ( 2) 00:07:17.835 34.855 - 35.052: 99.8787% ( 1) 00:07:17.835 38.991 - 39.188: 99.8863% ( 1) 00:07:17.835 39.188 - 39.385: 99.9014% ( 2) 00:07:17.835 41.945 - 42.142: 99.9090% ( 1) 00:07:17.835 43.914 - 44.111: 99.9166% ( 1) 00:07:17.835 45.489 - 45.686: 99.9242% ( 1) 00:07:17.835 45.686 - 45.883: 99.9318% ( 1) 00:07:17.835 47.458 - 47.655: 99.9393% ( 1) 00:07:17.835 48.246 - 48.443: 99.9469% ( 1) 00:07:17.835 48.837 - 49.034: 99.9545% ( 1) 00:07:17.835 50.806 - 51.200: 99.9697% ( 2) 00:07:17.835 61.834 - 62.228: 99.9773% ( 1) 00:07:17.835 71.286 - 71.680: 99.9848% ( 1) 00:07:17.835 96.886 - 97.280: 99.9924% ( 1) 00:07:17.835 127.606 - 128.394: 100.0000% ( 1) 00:07:17.835 
00:07:17.835 Complete histogram 00:07:17.835 ================== 00:07:17.835 Range in us Cumulative Count 00:07:17.835 7.286 - 7.335: 0.0682% ( 9) 00:07:17.835 7.335 - 7.385: 0.3260% ( 34) 00:07:17.835 7.385 - 7.434: 1.3117% ( 130) 00:07:17.835 7.434 - 7.483: 4.7843% ( 458) 00:07:17.835 7.483 - 7.532: 10.9030% ( 807) 00:07:17.835 7.532 - 7.582: 19.9257% ( 1190) 00:07:17.835 7.582 - 7.631: 30.4117% ( 1383) 00:07:17.835 7.631 - 7.680: 38.1151% ( 1016) 00:07:17.835 7.680 - 7.729: 42.5127% ( 580) 00:07:17.835 7.729 - 7.778: 45.1209% ( 344) 00:07:17.835 7.778 - 7.828: 48.9120% ( 500) 00:07:17.835 7.828 - 7.877: 54.7123% ( 765) 00:07:17.835 7.877 - 7.926: 60.4367% ( 755) 00:07:17.835 7.926 - 7.975: 66.0778% ( 744) 00:07:17.835 7.975 - 8.025: 72.8562% ( 894) 00:07:17.835 8.025 - 8.074: 78.9901% ( 809) 00:07:17.835 8.074 - 8.123: 83.9715% ( 657) 00:07:17.835 8.123 - 8.172: 87.9369% ( 523) 00:07:17.835 8.172 - 8.222: 90.8181% ( 380) 00:07:17.835 8.222 - 8.271: 92.6757% ( 245) 00:07:17.835 8.271 - 8.320: 93.8509% ( 155) 00:07:17.835 8.320 - 8.369: 94.7608% ( 120) 00:07:17.835 8.369 - 8.418: 95.3446% ( 77) 00:07:17.835 8.418 - 8.468: 95.7237% ( 50) 00:07:17.835 8.468 - 8.517: 95.9739% ( 33) 00:07:17.835 8.517 - 8.566: 96.1104% ( 18) 00:07:17.835 8.566 - 8.615: 96.1711% ( 8) 00:07:17.835 8.615 - 8.665: 96.2772% ( 14) 00:07:17.835 8.665 - 8.714: 96.3530% ( 10) 00:07:17.835 8.714 - 8.763: 96.4364% ( 11) 00:07:17.835 8.763 - 8.812: 96.4668% ( 4) 00:07:17.835 8.812 - 8.862: 96.5198% ( 7) 00:07:17.835 8.862 - 8.911: 96.5577% ( 5) 00:07:17.835 8.911 - 8.960: 96.5805% ( 3) 00:07:17.835 8.960 - 9.009: 96.6184% ( 5) 00:07:17.835 9.009 - 9.058: 96.6411% ( 3) 00:07:17.835 9.058 - 9.108: 96.6639% ( 3) 00:07:17.835 9.108 - 9.157: 96.6942% ( 4) 00:07:17.835 9.157 - 9.206: 96.7018% ( 1) 00:07:17.835 9.206 - 9.255: 96.7094% ( 1) 00:07:17.835 9.255 - 9.305: 96.7170% ( 1) 00:07:17.835 9.305 - 9.354: 96.7321% ( 2) 00:07:17.835 9.403 - 9.452: 96.7397% ( 1) 00:07:17.835 9.502 - 9.551: 96.7549% ( 2) 00:07:17.835 9.600 - 9.649: 96.7700% ( 2) 00:07:17.835 9.649 - 9.698: 96.7776% ( 1) 00:07:17.835 9.698 - 9.748: 96.7852% ( 1) 00:07:17.835 9.748 - 9.797: 96.8079% ( 3) 00:07:17.835 9.797 - 9.846: 96.8155% ( 1) 00:07:17.835 9.895 - 9.945: 96.8231% ( 1) 00:07:17.835 9.945 - 9.994: 96.8383% ( 2) 00:07:17.835 9.994 - 10.043: 96.8610% ( 3) 00:07:17.835 10.043 - 10.092: 96.8838% ( 3) 00:07:17.836 10.092 - 10.142: 96.8989% ( 2) 00:07:17.836 10.191 - 10.240: 96.9065% ( 1) 00:07:17.836 10.240 - 10.289: 96.9217% ( 2) 00:07:17.836 10.289 - 10.338: 96.9444% ( 3) 00:07:17.836 10.338 - 10.388: 96.9596% ( 2) 00:07:17.836 10.388 - 10.437: 96.9672% ( 1) 00:07:17.836 10.437 - 10.486: 96.9748% ( 1) 00:07:17.836 10.535 - 10.585: 96.9899% ( 2) 00:07:17.836 10.585 - 10.634: 96.9975% ( 1) 00:07:17.836 10.634 - 10.683: 97.0051% ( 1) 00:07:17.836 10.732 - 10.782: 97.0202% ( 2) 00:07:17.836 10.782 - 10.831: 97.0278% ( 1) 00:07:17.836 10.831 - 10.880: 97.0430% ( 2) 00:07:17.836 10.880 - 10.929: 97.0809% ( 5) 00:07:17.836 10.978 - 11.028: 97.0961% ( 2) 00:07:17.836 11.028 - 11.077: 97.1188% ( 3) 00:07:17.836 11.077 - 11.126: 97.1416% ( 3) 00:07:17.836 11.126 - 11.175: 97.1795% ( 5) 00:07:17.836 11.175 - 11.225: 97.2022% ( 3) 00:07:17.836 11.225 - 11.274: 97.2250% ( 3) 00:07:17.836 11.274 - 11.323: 97.2401% ( 2) 00:07:17.836 11.323 - 11.372: 97.2932% ( 7) 00:07:17.836 11.372 - 11.422: 97.3159% ( 3) 00:07:17.836 11.422 - 11.471: 97.3614% ( 6) 00:07:17.836 11.471 - 11.520: 97.3993% ( 5) 00:07:17.836 11.520 - 11.569: 97.4069% ( 1) 00:07:17.836 11.569 - 
11.618: 97.4221% ( 2) 00:07:17.836 11.618 - 11.668: 97.4600% ( 5) 00:07:17.836 11.815 - 11.865: 97.4676% ( 1) 00:07:17.836 11.865 - 11.914: 97.4828% ( 2) 00:07:17.836 11.914 - 11.963: 97.4903% ( 1) 00:07:17.836 12.012 - 12.062: 97.5055% ( 2) 00:07:17.836 12.062 - 12.111: 97.5131% ( 1) 00:07:17.836 12.160 - 12.209: 97.5207% ( 1) 00:07:17.836 12.308 - 12.357: 97.5358% ( 2) 00:07:17.836 12.357 - 12.406: 97.5434% ( 1) 00:07:17.836 12.455 - 12.505: 97.5510% ( 1) 00:07:17.836 12.554 - 12.603: 97.5737% ( 3) 00:07:17.836 12.702 - 12.800: 97.5889% ( 2) 00:07:17.836 12.898 - 12.997: 97.5965% ( 1) 00:07:17.836 12.997 - 13.095: 97.6116% ( 2) 00:07:17.836 13.095 - 13.194: 97.6192% ( 1) 00:07:17.836 13.194 - 13.292: 97.6268% ( 1) 00:07:17.836 13.292 - 13.391: 97.6420% ( 2) 00:07:17.836 13.391 - 13.489: 97.6950% ( 7) 00:07:17.836 13.489 - 13.588: 97.7330% ( 5) 00:07:17.836 13.588 - 13.686: 97.7785% ( 6) 00:07:17.836 13.686 - 13.785: 97.8391% ( 8) 00:07:17.836 13.785 - 13.883: 97.8998% ( 8) 00:07:17.836 13.883 - 13.982: 97.9832% ( 11) 00:07:17.836 13.982 - 14.080: 98.0817% ( 13) 00:07:17.836 14.080 - 14.178: 98.1424% ( 8) 00:07:17.836 14.178 - 14.277: 98.2030% ( 8) 00:07:17.836 14.277 - 14.375: 98.2561% ( 7) 00:07:17.836 14.375 - 14.474: 98.2865% ( 4) 00:07:17.836 14.474 - 14.572: 98.3774% ( 12) 00:07:17.836 14.572 - 14.671: 98.4457% ( 9) 00:07:17.836 14.671 - 14.769: 98.5291% ( 11) 00:07:17.836 14.769 - 14.868: 98.5594% ( 4) 00:07:17.836 14.868 - 14.966: 98.6276% ( 9) 00:07:17.836 14.966 - 15.065: 98.6580% ( 4) 00:07:17.836 15.065 - 15.163: 98.6807% ( 3) 00:07:17.836 15.163 - 15.262: 98.7110% ( 4) 00:07:17.836 15.262 - 15.360: 98.7565% ( 6) 00:07:17.836 15.360 - 15.458: 98.7793% ( 3) 00:07:17.836 15.458 - 15.557: 98.7944% ( 2) 00:07:17.836 15.557 - 15.655: 98.8020% ( 1) 00:07:17.836 15.655 - 15.754: 98.8172% ( 2) 00:07:17.836 15.754 - 15.852: 98.8248% ( 1) 00:07:17.836 15.852 - 15.951: 98.8399% ( 2) 00:07:17.836 15.951 - 16.049: 98.8703% ( 4) 00:07:17.836 16.049 - 16.148: 98.8779% ( 1) 00:07:17.836 16.148 - 16.246: 98.8930% ( 2) 00:07:17.836 16.345 - 16.443: 98.9309% ( 5) 00:07:17.836 16.443 - 16.542: 98.9385% ( 1) 00:07:17.836 16.542 - 16.640: 98.9461% ( 1) 00:07:17.836 16.837 - 16.935: 98.9537% ( 1) 00:07:17.836 16.935 - 17.034: 98.9688% ( 2) 00:07:17.836 17.132 - 17.231: 98.9764% ( 1) 00:07:17.836 17.231 - 17.329: 98.9916% ( 2) 00:07:17.836 17.329 - 17.428: 98.9992% ( 1) 00:07:17.836 17.625 - 17.723: 99.0067% ( 1) 00:07:17.836 17.723 - 17.822: 99.0219% ( 2) 00:07:17.836 17.920 - 18.018: 99.0447% ( 3) 00:07:17.836 18.018 - 18.117: 99.0522% ( 1) 00:07:17.836 18.117 - 18.215: 99.0598% ( 1) 00:07:17.836 18.215 - 18.314: 99.0674% ( 1) 00:07:17.836 18.314 - 18.412: 99.0750% ( 1) 00:07:17.836 18.412 - 18.511: 99.0826% ( 1) 00:07:17.836 18.511 - 18.609: 99.0902% ( 1) 00:07:17.836 18.905 - 19.003: 99.0977% ( 1) 00:07:17.836 19.298 - 19.397: 99.1129% ( 2) 00:07:17.836 19.495 - 19.594: 99.1205% ( 1) 00:07:17.836 19.889 - 19.988: 99.1281% ( 1) 00:07:17.836 20.185 - 20.283: 99.1356% ( 1) 00:07:17.836 20.677 - 20.775: 99.1432% ( 1) 00:07:17.836 22.252 - 22.351: 99.1660% ( 3) 00:07:17.836 22.351 - 22.449: 99.2266% ( 8) 00:07:17.836 22.449 - 22.548: 99.3555% ( 17) 00:07:17.836 22.548 - 22.646: 99.4768% ( 16) 00:07:17.836 22.646 - 22.745: 99.5906% ( 15) 00:07:17.836 22.745 - 22.843: 99.6891% ( 13) 00:07:17.836 22.843 - 22.942: 99.7346% ( 6) 00:07:17.836 22.942 - 23.040: 99.8180% ( 11) 00:07:17.836 23.040 - 23.138: 99.8635% ( 6) 00:07:17.836 23.138 - 23.237: 99.8787% ( 2) 00:07:17.836 23.237 - 23.335: 99.8863% ( 1) 
00:07:17.836 24.517 - 24.615: 99.8939% ( 1) 00:07:17.836 25.009 - 25.108: 99.9014% ( 1) 00:07:17.836 27.372 - 27.569: 99.9090% ( 1) 00:07:17.836 29.932 - 30.129: 99.9166% ( 1) 00:07:17.836 35.840 - 36.037: 99.9242% ( 1) 00:07:17.836 38.203 - 38.400: 99.9318% ( 1) 00:07:17.836 38.400 - 38.597: 99.9393% ( 1) 00:07:17.836 43.717 - 43.914: 99.9469% ( 1) 00:07:17.836 47.262 - 47.458: 99.9545% ( 1) 00:07:17.836 83.495 - 83.889: 99.9621% ( 1) 00:07:17.836 115.791 - 116.578: 99.9697% ( 1) 00:07:17.836 125.243 - 126.031: 99.9773% ( 1) 00:07:17.836 128.394 - 129.182: 99.9848% ( 1) 00:07:17.836 222.129 - 223.705: 99.9924% ( 1) 00:07:17.836 466.314 - 469.465: 100.0000% ( 1) 00:07:17.836 00:07:17.836 00:07:17.836 real 0m1.219s 00:07:17.836 user 0m1.063s 00:07:17.836 sys 0m0.103s 00:07:17.836 17:37:07 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.836 17:37:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 ************************************ 00:07:17.836 END TEST nvme_overhead 00:07:17.836 ************************************ 00:07:17.836 17:37:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:17.836 17:37:07 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:17.836 17:37:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.836 17:37:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 ************************************ 00:07:17.836 START TEST nvme_arbitration 00:07:17.836 ************************************ 00:07:17.836 17:37:07 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:21.132 Initializing NVMe Controllers 00:07:21.132 Attached to 0000:00:10.0 00:07:21.132 Attached to 0000:00:11.0 00:07:21.132 Attached to 0000:00:13.0 00:07:21.132 Attached to 0000:00:12.0 00:07:21.132 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:21.132 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:21.132 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:21.132 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:21.132 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:21.132 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:21.132 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:21.132 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:21.132 Initialization complete. Launching workers. 
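The "urgent priority queue" threads whose throughput is reported next each run on an I/O qpair created with a non-default arbitration priority; weighted round robin must be enabled on the controller for the priority to take effect. A sketch of requesting such a qpair, with illustrative names rather than the arbitration example's own code:

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    my_alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;   /* vs LOW/MEDIUM/HIGH under WRR */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }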
00:07:21.132 Starting thread on core 1 with urgent priority queue 00:07:21.132 Starting thread on core 2 with urgent priority queue 00:07:21.132 Starting thread on core 3 with urgent priority queue 00:07:21.132 Starting thread on core 0 with urgent priority queue 00:07:21.132 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:21.132 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:21.132 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:21.132 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:21.132 QEMU NVMe Ctrl (12343 ) core 2: 810.67 IO/s 123.36 secs/100000 ios 00:07:21.132 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:07:21.132 ======================================================== 00:07:21.132 00:07:21.132 00:07:21.132 real 0m3.298s 00:07:21.132 user 0m9.208s 00:07:21.132 sys 0m0.126s 00:07:21.132 17:37:10 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.133 17:37:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:21.133 ************************************ 00:07:21.133 END TEST nvme_arbitration 00:07:21.133 ************************************ 00:07:21.133 17:37:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:21.133 17:37:10 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:21.133 17:37:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.133 17:37:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.133 ************************************ 00:07:21.133 START TEST nvme_single_aen 00:07:21.133 ************************************ 00:07:21.133 17:37:10 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:21.412 Asynchronous Event Request test 00:07:21.412 Attached to 0000:00:10.0 00:07:21.412 Attached to 0000:00:11.0 00:07:21.412 Attached to 0000:00:13.0 00:07:21.412 Attached to 0000:00:12.0 00:07:21.412 Reset controller to setup AER completions for this process 00:07:21.412 Registering asynchronous event callbacks... 
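That registration step maps onto a single driver call; a minimal sketch follows, assuming an attached controller (the my_* names are illustrative). The test then arms the event by pushing the temperature threshold feature below the drive's current reading, which is why each controller logs a threshold reset followed by a current-temperature line.

    #include "spdk/nvme.h"

    static void my_aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* The "aer_cb for log page 2" lines below correspond to this callback
         * firing; log page 2 is SMART / Health Information, which carries the
         * temperature-threshold event. */
    }

    static void my_arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, my_aer_cb, NULL);
        /* Next step (not shown): set feature 0x04, Temperature Threshold,
         * below the current composite temperature so the controller raises
         * the asynchronous event. */
    }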
00:07:21.412 Getting orig temperature thresholds of all controllers 00:07:21.412 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:21.412 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:21.412 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:21.412 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:21.412 Setting all controllers temperature threshold low to trigger AER 00:07:21.412 Waiting for all controllers temperature threshold to be set lower 00:07:21.412 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:21.412 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:21.412 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:21.412 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:21.412 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:21.412 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:21.412 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:21.412 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:21.412 Waiting for all controllers to trigger AER and reset threshold 00:07:21.412 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.412 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.412 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.412 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.412 Cleaning up... 00:07:21.412 ************************************ 00:07:21.412 END TEST nvme_single_aen 00:07:21.412 ************************************ 00:07:21.412 00:07:21.412 real 0m0.208s 00:07:21.412 user 0m0.078s 00:07:21.412 sys 0m0.089s 00:07:21.412 17:37:11 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.412 17:37:11 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:21.412 17:37:11 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:21.413 17:37:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.413 17:37:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.413 17:37:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.413 ************************************ 00:07:21.413 START TEST nvme_doorbell_aers 00:07:21.413 ************************************ 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:21.413 17:37:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:21.680 [2024-10-13 17:37:11.327075] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:07:31.657 Executing: test_write_invalid_db 00:07:31.657 Waiting for AER completion... 00:07:31.657 Failure: test_write_invalid_db 00:07:31.657 00:07:31.657 Executing: test_invalid_db_write_overflow_sq 00:07:31.657 Waiting for AER completion... 00:07:31.657 Failure: test_invalid_db_write_overflow_sq 00:07:31.657 00:07:31.657 Executing: test_invalid_db_write_overflow_cq 00:07:31.657 Waiting for AER completion... 00:07:31.657 Failure: test_invalid_db_write_overflow_cq 00:07:31.657 00:07:31.657 17:37:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:31.657 17:37:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:31.657 [2024-10-13 17:37:21.359485] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:07:41.624 Executing: test_write_invalid_db 00:07:41.624 Waiting for AER completion... 00:07:41.624 Failure: test_write_invalid_db 00:07:41.624 00:07:41.624 Executing: test_invalid_db_write_overflow_sq 00:07:41.624 Waiting for AER completion... 00:07:41.624 Failure: test_invalid_db_write_overflow_sq 00:07:41.624 00:07:41.624 Executing: test_invalid_db_write_overflow_cq 00:07:41.624 Waiting for AER completion... 00:07:41.624 Failure: test_invalid_db_write_overflow_cq 00:07:41.624 00:07:41.624 17:37:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.624 17:37:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:41.624 [2024-10-13 17:37:31.409079] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:07:51.591 Executing: test_write_invalid_db 00:07:51.591 Waiting for AER completion... 00:07:51.591 Failure: test_write_invalid_db 00:07:51.591 00:07:51.591 Executing: test_invalid_db_write_overflow_sq 00:07:51.591 Waiting for AER completion... 00:07:51.591 Failure: test_invalid_db_write_overflow_sq 00:07:51.591 00:07:51.591 Executing: test_invalid_db_write_overflow_cq 00:07:51.591 Waiting for AER completion... 
00:07:51.591 Failure: test_invalid_db_write_overflow_cq 00:07:51.591 00:07:51.591 17:37:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.591 17:37:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:51.849 [2024-10-13 17:37:41.449524] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.812 Executing: test_write_invalid_db 00:08:01.812 Waiting for AER completion... 00:08:01.812 Failure: test_write_invalid_db 00:08:01.812 00:08:01.812 Executing: test_invalid_db_write_overflow_sq 00:08:01.813 Waiting for AER completion... 00:08:01.813 Failure: test_invalid_db_write_overflow_sq 00:08:01.813 00:08:01.813 Executing: test_invalid_db_write_overflow_cq 00:08:01.813 Waiting for AER completion... 00:08:01.813 Failure: test_invalid_db_write_overflow_cq 00:08:01.813 00:08:01.813 ************************************ 00:08:01.813 END TEST nvme_doorbell_aers 00:08:01.813 ************************************ 00:08:01.813 00:08:01.813 real 0m40.200s 00:08:01.813 user 0m34.150s 00:08:01.813 sys 0m5.658s 00:08:01.813 17:37:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.813 17:37:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:01.813 17:37:51 nvme -- nvme/nvme.sh@97 -- # uname 00:08:01.813 17:37:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:01.813 17:37:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:01.813 17:37:51 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:01.813 17:37:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.813 17:37:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.813 ************************************ 00:08:01.813 START TEST nvme_multi_aen 00:08:01.813 ************************************ 00:08:01.813 17:37:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:01.813 [2024-10-13 17:37:51.509862] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.509921] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.509932] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.511096] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.511118] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.511125] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.512061] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. 
Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.512083] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.512090] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.513023] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.513114] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 [2024-10-13 17:37:51.513174] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63592) is not found. Dropping the request. 00:08:01.813 Child process pid: 64118 00:08:02.071 [Child] Asynchronous Event Request test 00:08:02.071 [Child] Attached to 0000:00:10.0 00:08:02.071 [Child] Attached to 0000:00:11.0 00:08:02.071 [Child] Attached to 0000:00:13.0 00:08:02.071 [Child] Attached to 0000:00:12.0 00:08:02.071 [Child] Registering asynchronous event callbacks... 00:08:02.071 [Child] Getting orig temperature thresholds of all controllers 00:08:02.071 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:02.071 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 [Child] Cleaning up... 00:08:02.071 Asynchronous Event Request test 00:08:02.071 Attached to 0000:00:10.0 00:08:02.071 Attached to 0000:00:11.0 00:08:02.071 Attached to 0000:00:13.0 00:08:02.071 Attached to 0000:00:12.0 00:08:02.071 Reset controller to setup AER completions for this process 00:08:02.071 Registering asynchronous event callbacks... 
00:08:02.071 Getting orig temperature thresholds of all controllers 00:08:02.071 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:02.071 Setting all controllers temperature threshold low to trigger AER 00:08:02.071 Waiting for all controllers temperature threshold to be set lower 00:08:02.071 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:02.071 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:02.071 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:02.071 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:02.071 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:02.071 Waiting for all controllers to trigger AER and reset threshold 00:08:02.071 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.071 Cleaning up... 00:08:02.071 00:08:02.071 real 0m0.446s 00:08:02.071 user 0m0.132s 00:08:02.071 sys 0m0.199s 00:08:02.071 17:37:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.071 17:37:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:02.071 ************************************ 00:08:02.071 END TEST nvme_multi_aen 00:08:02.071 ************************************ 00:08:02.071 17:37:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:02.071 17:37:51 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:02.071 17:37:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.071 17:37:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.071 ************************************ 00:08:02.071 START TEST nvme_startup 00:08:02.071 ************************************ 00:08:02.071 17:37:51 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:02.329 Initializing NVMe Controllers 00:08:02.329 Attached to 0000:00:10.0 00:08:02.329 Attached to 0000:00:11.0 00:08:02.329 Attached to 0000:00:13.0 00:08:02.330 Attached to 0000:00:12.0 00:08:02.330 Initialization complete. 00:08:02.330 Time used:165147.016 (us). 
00:08:02.330 00:08:02.330 real 0m0.235s 00:08:02.330 user 0m0.060s 00:08:02.330 sys 0m0.107s 00:08:02.330 17:37:52 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.330 17:37:52 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:02.330 ************************************ 00:08:02.330 END TEST nvme_startup 00:08:02.330 ************************************ 00:08:02.330 17:37:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:02.330 17:37:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.330 17:37:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.330 17:37:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.330 ************************************ 00:08:02.330 START TEST nvme_multi_secondary 00:08:02.330 ************************************ 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64174 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64175 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:02.330 17:37:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:05.618 Initializing NVMe Controllers 00:08:05.618 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.618 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.618 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.618 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.618 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:05.618 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:05.618 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:05.618 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:05.618 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:05.618 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:05.618 Initialization complete. Launching workers. 
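Every binary in this suite is launched with "-i 0". That flag is the shared-memory ID behind the primary/secondary split exercised here: processes initialized with the same ID join one hugepage group, which is what lets these spdk_nvme_perf instances drive the controllers concurrently. A sketch of the env setup implied by the flag, illustrative rather than the tool's code:

    #include "spdk/env.h"

    static int my_env_setup(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name   = "my_tool";
        opts.shm_id = 0;               /* the "-i 0" seen throughout this log */
        return spdk_env_init(&opts);   /* 0 on success */
    }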
00:08:05.618 ======================================================== 00:08:05.618 Latency(us) 00:08:05.618 Device Information : IOPS MiB/s Average min max 00:08:05.618 PCIE (0000:00:10.0) NSID 1 from core 2: 1267.58 4.95 12620.99 939.93 28436.73 00:08:05.618 PCIE (0000:00:11.0) NSID 1 from core 2: 1272.90 4.97 12569.09 987.00 28361.24 00:08:05.618 PCIE (0000:00:13.0) NSID 1 from core 2: 1272.90 4.97 12571.93 972.87 33810.15 00:08:05.618 PCIE (0000:00:12.0) NSID 1 from core 2: 1272.90 4.97 12571.81 964.41 29264.98 00:08:05.618 PCIE (0000:00:12.0) NSID 2 from core 2: 1272.90 4.97 12576.50 964.99 28634.72 00:08:05.618 PCIE (0000:00:12.0) NSID 3 from core 2: 1272.90 4.97 12578.90 971.57 28461.50 00:08:05.618 ======================================================== 00:08:05.618 Total : 7632.10 29.81 12581.51 939.93 33810.15 00:08:05.618 00:08:05.879 17:37:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64174 00:08:05.879 Initializing NVMe Controllers 00:08:05.879 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.879 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.879 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.879 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.879 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:05.879 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:05.879 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:05.879 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:05.879 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:05.879 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:05.879 Initialization complete. Launching workers. 00:08:05.879 ======================================================== 00:08:05.879 Latency(us) 00:08:05.879 Device Information : IOPS MiB/s Average min max 00:08:05.879 PCIE (0000:00:10.0) NSID 1 from core 1: 2743.56 10.72 5829.79 1293.72 11721.77 00:08:05.879 PCIE (0000:00:11.0) NSID 1 from core 1: 2743.56 10.72 5832.33 1329.44 11688.78 00:08:05.879 PCIE (0000:00:13.0) NSID 1 from core 1: 2743.56 10.72 5832.40 1318.98 11450.81 00:08:05.879 PCIE (0000:00:12.0) NSID 1 from core 1: 2743.56 10.72 5832.37 1385.59 12359.91 00:08:05.879 PCIE (0000:00:12.0) NSID 2 from core 1: 2743.56 10.72 5833.17 1371.99 11106.94 00:08:05.879 PCIE (0000:00:12.0) NSID 3 from core 1: 2748.89 10.74 5823.76 1447.15 11953.45 00:08:05.879 ======================================================== 00:08:05.879 Total : 16466.71 64.32 5830.64 1293.72 12359.91 00:08:05.879 00:08:07.851 Initializing NVMe Controllers 00:08:07.851 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.851 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.851 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:07.851 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.851 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:07.851 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:07.851 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:07.851 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:07.851 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:07.851 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:07.851 Initialization complete. Launching workers. 
00:08:07.851 ======================================================== 00:08:07.851 Latency(us) 00:08:07.851 Device Information : IOPS MiB/s Average min max 00:08:07.851 PCIE (0000:00:10.0) NSID 1 from core 0: 4210.10 16.45 3798.86 785.30 31762.37 00:08:07.852 PCIE (0000:00:11.0) NSID 1 from core 0: 4210.10 16.45 3800.10 804.33 32163.65 00:08:07.852 PCIE (0000:00:13.0) NSID 1 from core 0: 4210.10 16.45 3800.18 802.26 32154.01 00:08:07.852 PCIE (0000:00:12.0) NSID 1 from core 0: 4210.10 16.45 3800.26 797.93 32442.38 00:08:07.852 PCIE (0000:00:12.0) NSID 2 from core 0: 4210.10 16.45 3800.32 804.25 30927.32 00:08:07.852 PCIE (0000:00:12.0) NSID 3 from core 0: 4210.10 16.45 3800.39 794.84 31709.94 00:08:07.852 ======================================================== 00:08:07.852 Total : 25260.58 98.67 3800.02 785.30 32442.38 00:08:07.852 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64175 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64245 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64246 00:08:07.852 17:37:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:11.133 Initializing NVMe Controllers 00:08:11.133 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:11.133 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:11.133 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:11.133 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:11.133 Initialization complete. Launching workers. 
00:08:11.133 ======================================================== 00:08:11.133 Latency(us) 00:08:11.133 Device Information : IOPS MiB/s Average min max 00:08:11.133 PCIE (0000:00:10.0) NSID 1 from core 1: 2432.35 9.50 6575.94 1022.10 13780.23 00:08:11.133 PCIE (0000:00:11.0) NSID 1 from core 1: 2432.35 9.50 6581.25 1041.83 13826.43 00:08:11.133 PCIE (0000:00:13.0) NSID 1 from core 1: 2432.35 9.50 6581.30 1047.39 13308.39 00:08:11.133 PCIE (0000:00:12.0) NSID 1 from core 1: 2432.35 9.50 6581.28 1038.16 12998.26 00:08:11.133 PCIE (0000:00:12.0) NSID 2 from core 1: 2432.35 9.50 6582.77 1026.71 13083.81 00:08:11.133 PCIE (0000:00:12.0) NSID 3 from core 1: 2437.67 9.52 6568.34 1035.14 13404.89 00:08:11.133 ======================================================== 00:08:11.133 Total : 14599.41 57.03 6578.48 1022.10 13826.43 00:08:11.133 00:08:11.133 Initializing NVMe Controllers 00:08:11.133 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:11.133 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:11.133 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:11.133 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:11.133 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:11.133 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:11.133 Initialization complete. Launching workers. 00:08:11.133 ======================================================== 00:08:11.133 Latency(us) 00:08:11.133 Device Information : IOPS MiB/s Average min max 00:08:11.133 PCIE (0000:00:10.0) NSID 1 from core 0: 2440.77 9.53 6553.26 1515.93 17136.01 00:08:11.133 PCIE (0000:00:11.0) NSID 1 from core 0: 2440.77 9.53 6557.38 1631.14 16395.28 00:08:11.133 PCIE (0000:00:13.0) NSID 1 from core 0: 2440.77 9.53 6559.37 1640.47 16850.86 00:08:11.133 PCIE (0000:00:12.0) NSID 1 from core 0: 2440.77 9.53 6560.51 1656.45 16927.76 00:08:11.133 PCIE (0000:00:12.0) NSID 2 from core 0: 2440.77 9.53 6560.77 1627.84 17008.41 00:08:11.133 PCIE (0000:00:12.0) NSID 3 from core 0: 2440.77 9.53 6560.83 1627.63 16594.87 00:08:11.133 ======================================================== 00:08:11.133 Total : 14644.62 57.21 6558.69 1515.93 17136.01 00:08:11.134 00:08:13.033 Initializing NVMe Controllers 00:08:13.033 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.033 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.033 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.033 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.033 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:13.033 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:13.033 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:13.033 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:13.033 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:13.033 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:13.033 Initialization complete. Launching workers. 
00:08:13.033 ======================================================== 00:08:13.033 Latency(us) 00:08:13.033 Device Information : IOPS MiB/s Average min max 00:08:13.033 PCIE (0000:00:10.0) NSID 1 from core 2: 2160.62 8.44 7404.05 1345.72 29548.12 00:08:13.033 PCIE (0000:00:11.0) NSID 1 from core 2: 2161.82 8.44 7400.37 1226.81 29547.24 00:08:13.033 PCIE (0000:00:13.0) NSID 1 from core 2: 2163.81 8.45 7393.81 1341.01 31698.83 00:08:13.033 PCIE (0000:00:12.0) NSID 1 from core 2: 2163.81 8.45 7393.70 1357.54 28117.17 00:08:13.033 PCIE (0000:00:12.0) NSID 2 from core 2: 2163.81 8.45 7393.23 1377.60 30630.11 00:08:13.033 PCIE (0000:00:12.0) NSID 3 from core 2: 2163.81 8.45 7393.47 1286.78 30632.57 00:08:13.033 ======================================================== 00:08:13.033 Total : 12977.69 50.69 7396.44 1226.81 31698.83 00:08:13.033 00:08:13.292 ************************************ 00:08:13.292 END TEST nvme_multi_secondary 00:08:13.292 ************************************ 00:08:13.292 17:38:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64245 00:08:13.292 17:38:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64246 00:08:13.292 00:08:13.292 real 0m10.777s 00:08:13.292 user 0m18.252s 00:08:13.292 sys 0m0.742s 00:08:13.292 17:38:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.292 17:38:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:13.292 17:38:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:13.292 17:38:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:13.292 17:38:02 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63189 ]] 00:08:13.292 17:38:02 nvme -- common/autotest_common.sh@1090 -- # kill 63189 00:08:13.292 17:38:02 nvme -- common/autotest_common.sh@1091 -- # wait 63189 00:08:13.292 [2024-10-13 17:38:02.940316] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.940405] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.940447] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.940474] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.944160] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.944246] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.944276] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.944307] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.947646] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 
00:08:13.292 [2024-10-13 17:38:02.947730] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.947760] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.947791] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.951366] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.951633] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.951672] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:02.951702] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64117) is not found. Dropping the request. 00:08:13.292 [2024-10-13 17:38:03.070051] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:13.292 17:38:03 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:13.292 17:38:03 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:13.292 17:38:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:13.292 17:38:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.292 17:38:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.292 17:38:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.292 ************************************ 00:08:13.292 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:13.292 ************************************ 00:08:13.292 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:13.554 * Looking for test storage... 
00:08:13.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.554 --rc genhtml_branch_coverage=1 00:08:13.554 --rc genhtml_function_coverage=1 00:08:13.554 --rc genhtml_legend=1 00:08:13.554 --rc geninfo_all_blocks=1 00:08:13.554 --rc geninfo_unexecuted_blocks=1 00:08:13.554 00:08:13.554 ' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.554 --rc genhtml_branch_coverage=1 00:08:13.554 --rc genhtml_function_coverage=1 00:08:13.554 --rc genhtml_legend=1 00:08:13.554 --rc geninfo_all_blocks=1 00:08:13.554 --rc geninfo_unexecuted_blocks=1 00:08:13.554 00:08:13.554 ' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.554 --rc genhtml_branch_coverage=1 00:08:13.554 --rc genhtml_function_coverage=1 00:08:13.554 --rc genhtml_legend=1 00:08:13.554 --rc geninfo_all_blocks=1 00:08:13.554 --rc geninfo_unexecuted_blocks=1 00:08:13.554 00:08:13.554 ' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.554 --rc genhtml_branch_coverage=1 00:08:13.554 --rc genhtml_function_coverage=1 00:08:13.554 --rc genhtml_legend=1 00:08:13.554 --rc geninfo_all_blocks=1 00:08:13.554 --rc geninfo_unexecuted_blocks=1 00:08:13.554 00:08:13.554 ' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:13.554 
17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:13.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64408 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64408 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64408 ']' 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
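The stuck-admin-command flow above can be replayed by hand with the same RPCs this log exercises. A minimal sketch, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock and the first controller at 0000:00:10.0 as in this run; the output file name is illustrative (the script itself uses mktemp):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Attach the controller under test as "nvme0" (nvme_reset_stuck_adm_cmd.sh@40).
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Arm a one-shot failure on the next GET FEATURES admin command (opc 10): hold
# it for up to 15 s, then complete it with SCT=0 / SC=1 (@20/@25/@27 above).
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
  --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Send the GET FEATURES (NUMBER OF QUEUES) SQE that now gets stuck; keep it in
# the background and capture the JSON completion (@50 below).
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
  -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
  >/tmp/err_inj.txt &
sleep 2
# The reset has to complete the stuck command manually; the test measures the
# wall-clock delta around this call against test_timeout=5.
$rpc bdev_nvme_reset_controller nvme0
wait
jq -r .cpl /tmp/err_inj.txt   # base64 CQE; bytes 14-15 carry the phase/status field

The shift/mask pairs passed to base64_decode_bits further down (1/255 for SC, 9/3 for SCT) pull exactly the injected values back out of that status field: (0x0002 >> 1) & 0xff = 0x1 and (0x0002 >> 9) & 0x3 = 0x0.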
00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.554 17:38:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:13.813 [2024-10-13 17:38:03.394933] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:08:13.813 [2024-10-13 17:38:03.395718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:08:13.813 [2024-10-13 17:38:03.565461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.073 [2024-10-13 17:38:03.694599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.073 [2024-10-13 17:38:03.694782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.073 [2024-10-13 17:38:03.696003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.073 [2024-10-13 17:38:03.696095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:15.057 nvme0n1 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_sVhka.txt 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:15.057 true 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728841084 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64431 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:15.057 17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:15.057 
17:38:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 [2024-10-13 17:38:06.580084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:16.995 [2024-10-13 17:38:06.580789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:16.995 [2024-10-13 17:38:06.581168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:16.995 [2024-10-13 17:38:06.581488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:16.995 [2024-10-13 17:38:06.585182] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:16.995 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64431 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64431 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64431 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_sVhka.txt 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_sVhka.txt 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64408 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64408 ']' 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64408 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64408 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64408' 00:08:16.995 killing process with pid 64408 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64408 00:08:16.995 17:38:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64408 00:08:18.915 17:38:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:18.915 17:38:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:18.915 00:08:18.915 real 0m5.210s 00:08:18.915 user 0m18.247s 00:08:18.915 sys 0m0.687s 00:08:18.915 17:38:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:18.915 17:38:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 ************************************ 00:08:18.915 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:18.915 ************************************ 00:08:18.915 17:38:08 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:18.915 17:38:08 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:18.915 17:38:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.915 17:38:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.915 17:38:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 ************************************ 00:08:18.915 START TEST nvme_fio 00:08:18.915 ************************************ 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:18.915 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:18.915 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:19.176 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:19.176 17:38:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:19.176 17:38:08 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:19.176 17:38:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:19.437 fio-3.35 00:08:19.437 Starting 1 thread 00:08:24.720 00:08:24.720 test: (groupid=0, jobs=1): err= 0: pid=64570: Sun Oct 13 17:38:14 2024 00:08:24.720 read: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(137MiB/2001msec) 00:08:24.720 slat (nsec): min=3680, max=94877, avg=5939.35, stdev=3032.42 00:08:24.720 clat (usec): min=281, max=10524, avg=3629.63, stdev=1147.49 00:08:24.720 lat (usec): min=286, max=10568, avg=3635.57, stdev=1148.76 00:08:24.720 clat percentiles (usec): 00:08:24.720 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2835], 00:08:24.720 | 30.00th=[ 2966], 40.00th=[ 3097], 50.00th=[ 3228], 60.00th=[ 3392], 00:08:24.720 | 70.00th=[ 3687], 80.00th=[ 4293], 90.00th=[ 5407], 95.00th=[ 6194], 00:08:24.720 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8848], 99.95th=[ 9634], 00:08:24.720 | 99.99th=[10421] 00:08:24.720 bw ( KiB/s): min=67992, max=71512, per=100.00%, avg=70250.67, stdev=1960.51, samples=3 00:08:24.720 iops : min=16998, max=17878, avg=17562.67, stdev=490.13, samples=3 00:08:24.720 write: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(137MiB/2001msec); 0 zone resets 00:08:24.720 slat (usec): min=3, max=104, avg= 6.25, stdev= 3.10 00:08:24.720 clat (usec): min=327, max=10452, avg=3666.42, stdev=1149.79 00:08:24.720 lat (usec): min=332, max=10470, avg=3672.67, stdev=1151.05 00:08:24.720 clat percentiles (usec): 00:08:24.720 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2868], 00:08:24.720 | 30.00th=[ 2999], 40.00th=[ 3130], 50.00th=[ 3261], 60.00th=[ 3458], 00:08:24.720 | 70.00th=[ 3720], 80.00th=[ 4359], 90.00th=[ 5407], 95.00th=[ 6259], 00:08:24.720 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8848], 99.95th=[ 9765], 00:08:24.720 | 99.99th=[10290] 00:08:24.720 bw ( KiB/s): min=68200, max=71352, per=100.00%, avg=70253.33, stdev=1779.70, samples=3 00:08:24.720 iops : min=17050, max=17838, avg=17563.33, stdev=444.92, samples=3 00:08:24.720 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:24.720 lat (msec) : 2=0.16%, 4=75.38%, 10=24.39%, 20=0.03% 00:08:24.720 cpu : usr=98.80%, sys=0.10%, ctx=3, majf=0, 
minf=607 00:08:24.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:24.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:24.721 issued rwts: total=34953,34985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:24.721 00:08:24.721 Run status group 0 (all jobs): 00:08:24.721 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=137MiB (143MB), run=2001-2001msec 00:08:24.721 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=137MiB (143MB), run=2001-2001msec 00:08:24.721 ----------------------------------------------------- 00:08:24.721 Suppressions used: 00:08:24.721 count bytes template 00:08:24.721 1 32 /usr/src/fio/parse.c 00:08:24.721 1 8 libtcmalloc_minimal.so 00:08:24.721 ----------------------------------------------------- 00:08:24.721 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:24.721 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:24.981 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:24.981 17:38:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:24.981 17:38:14 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:24.981 17:38:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:25.242 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:25.242 fio-3.35 00:08:25.242 Starting 1 thread 00:08:31.826 00:08:31.826 test: (groupid=0, jobs=1): err= 0: pid=64626: Sun Oct 13 17:38:20 2024 00:08:31.826 read: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(148MiB/2001msec) 00:08:31.826 slat (nsec): min=3358, max=83567, avg=5485.86, stdev=2803.72 00:08:31.826 clat (usec): min=270, max=10533, avg=3349.96, stdev=1133.89 00:08:31.826 lat (usec): min=276, max=10577, avg=3355.44, stdev=1135.13 00:08:31.826 clat percentiles (usec): 00:08:31.826 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2573], 00:08:31.826 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3130], 00:08:31.826 | 70.00th=[ 3392], 80.00th=[ 4015], 90.00th=[ 5080], 95.00th=[ 5866], 00:08:31.826 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[ 8717], 99.95th=[ 9503], 00:08:31.826 | 99.99th=[10421] 00:08:31.826 bw ( KiB/s): min=67576, max=80912, per=99.39%, avg=75216.00, stdev=6877.25, samples=3 00:08:31.826 iops : min=16894, max=20228, avg=18804.00, stdev=1719.31, samples=3 00:08:31.826 write: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(148MiB/2001msec); 0 zone resets 00:08:31.826 slat (nsec): min=3509, max=73835, avg=5748.20, stdev=2796.55 00:08:31.826 clat (usec): min=302, max=10463, avg=3388.76, stdev=1150.91 00:08:31.826 lat (usec): min=308, max=10476, avg=3394.51, stdev=1152.14 00:08:31.826 clat percentiles (usec): 00:08:31.826 | 1.00th=[ 2089], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:08:31.826 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3163], 00:08:31.826 | 70.00th=[ 3425], 80.00th=[ 4080], 90.00th=[ 5145], 95.00th=[ 5997], 00:08:31.826 | 99.00th=[ 7111], 99.50th=[ 7701], 99.90th=[ 8848], 99.95th=[ 9634], 00:08:31.826 | 99.99th=[10290] 00:08:31.826 bw ( KiB/s): min=67704, max=80872, per=99.39%, avg=75234.67, stdev=6785.10, samples=3 00:08:31.826 iops : min=16926, max=20218, avg=18808.67, stdev=1696.28, samples=3 00:08:31.826 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:31.826 lat (msec) : 2=0.55%, 4=78.93%, 10=20.45%, 20=0.02% 00:08:31.826 cpu : usr=98.75%, sys=0.25%, ctx=3, majf=0, minf=607 00:08:31.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:31.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.826 issued rwts: total=37856,37868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.826 00:08:31.826 Run status group 0 (all jobs): 00:08:31.826 READ: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:31.826 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:31.826 ----------------------------------------------------- 00:08:31.826 Suppressions used: 00:08:31.826 count bytes template 00:08:31.826 1 32 /usr/src/fio/parse.c 00:08:31.826 1 8 libtcmalloc_minimal.so 00:08:31.826 ----------------------------------------------------- 00:08:31.826 
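Each of the four controllers goes through the same sequence visible above: identify to confirm an active namespace, choose the fio block size from the reported LBA format, then run the example job through the SPDK fio plugin with the ASan runtime preloaded ahead of it. A condensed sketch of the nvme.sh@34-@43 loop, with paths as they appear in this run (the extended-LBA branch is never taken here, so bs stays 4096):

fio_bin=/usr/src/fio/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  # Skip controllers that expose no active namespace (nvme.sh@35).
  "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue
  # The 'Extended Data LBA' grep (@38) switches bs for inline-metadata formats;
  # none of these namespaces report one, so the plain 4096-byte branch fires.
  bs=4096
  # fio treats ':' in --filename as a separator, hence the dot-separated traddr;
  # the test preloads libasan ahead of the plugin, as in the xtrace above.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
    "$fio_bin" "$cfg" --filename="trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done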
00:08:31.826 17:38:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:31.826 17:38:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:31.826 17:38:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:31.826 17:38:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:31.826 17:38:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:31.826 17:38:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:31.826 17:38:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:31.826 17:38:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:31.826 17:38:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:31.827 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:31.827 fio-3.35 00:08:31.827 Starting 1 thread 00:08:37.115 00:08:37.115 test: (groupid=0, jobs=1): err= 0: pid=64692: Sun Oct 13 17:38:26 2024 00:08:37.115 read: IOPS=15.7k, BW=61.4MiB/s (64.3MB/s)(123MiB/2001msec) 00:08:37.115 slat (usec): min=3, max=489, avg= 7.34, stdev= 5.12 00:08:37.115 clat (usec): min=321, max=11762, avg=4045.57, stdev=1565.03 00:08:37.115 lat (usec): min=328, max=11768, avg=4052.90, stdev=1566.87 00:08:37.115 clat percentiles (usec): 00:08:37.115 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2868], 00:08:37.115 | 30.00th=[ 
3064], 40.00th=[ 3228], 50.00th=[ 3425], 60.00th=[ 3687], 00:08:37.115 | 70.00th=[ 4228], 80.00th=[ 5407], 90.00th=[ 6521], 95.00th=[ 7308], 00:08:37.115 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[11207], 00:08:37.115 | 99.99th=[11731] 00:08:37.115 bw ( KiB/s): min=52672, max=78403, per=100.00%, avg=63147.67, stdev=13514.99, samples=3 00:08:37.115 iops : min=13168, max=19600, avg=15786.67, stdev=3378.32, samples=3 00:08:37.115 write: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(123MiB/2001msec); 0 zone resets 00:08:37.115 slat (usec): min=4, max=573, avg= 7.64, stdev= 4.93 00:08:37.115 clat (usec): min=238, max=12057, avg=4062.58, stdev=1567.02 00:08:37.115 lat (usec): min=245, max=12065, avg=4070.22, stdev=1568.81 00:08:37.115 clat percentiles (usec): 00:08:37.115 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2900], 00:08:37.115 | 30.00th=[ 3064], 40.00th=[ 3261], 50.00th=[ 3425], 60.00th=[ 3687], 00:08:37.115 | 70.00th=[ 4228], 80.00th=[ 5407], 90.00th=[ 6587], 95.00th=[ 7373], 00:08:37.115 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[11600], 00:08:37.115 | 99.99th=[11731] 00:08:37.115 bw ( KiB/s): min=51736, max=78267, per=99.91%, avg=62846.33, stdev=13780.70, samples=3 00:08:37.115 iops : min=12934, max=19566, avg=15711.33, stdev=3444.76, samples=3 00:08:37.115 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:08:37.115 lat (msec) : 2=0.13%, 4=67.30%, 10=32.28%, 20=0.23% 00:08:37.115 cpu : usr=97.70%, sys=0.60%, ctx=23, majf=0, minf=607 00:08:37.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.115 issued rwts: total=31435,31466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.115 00:08:37.115 Run status group 0 (all jobs): 00:08:37.115 READ: bw=61.4MiB/s (64.3MB/s), 61.4MiB/s-61.4MiB/s (64.3MB/s-64.3MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:37.115 WRITE: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:37.115 ----------------------------------------------------- 00:08:37.115 Suppressions used: 00:08:37.115 count bytes template 00:08:37.115 1 32 /usr/src/fio/parse.c 00:08:37.115 1 8 libtcmalloc_minimal.so 00:08:37.115 ----------------------------------------------------- 00:08:37.115 00:08:37.115 17:38:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.115 17:38:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.115 17:38:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.115 17:38:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.376 17:38:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.376 17:38:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:37.637 17:38:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:37.637 17:38:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:37.637 17:38:27 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.898 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:37.898 fio-3.35 00:08:37.898 Starting 1 thread 00:08:46.032 00:08:46.032 test: (groupid=0, jobs=1): err= 0: pid=64754: Sun Oct 13 17:38:35 2024 00:08:46.032 read: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(123MiB/2001msec) 00:08:46.032 slat (nsec): min=3946, max=64266, avg=6293.59, stdev=3494.94 00:08:46.032 clat (usec): min=659, max=11341, avg=4035.76, stdev=1360.67 00:08:46.032 lat (usec): min=665, max=11399, avg=4042.06, stdev=1361.94 00:08:46.032 clat percentiles (usec): 00:08:46.032 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2966], 00:08:46.032 | 30.00th=[ 3195], 40.00th=[ 3392], 50.00th=[ 3589], 60.00th=[ 3851], 00:08:46.032 | 70.00th=[ 4424], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6718], 00:08:46.032 | 99.00th=[ 7832], 99.50th=[ 8455], 99.90th=[10028], 99.95th=[10552], 00:08:46.032 | 99.99th=[11338] 00:08:46.032 bw ( KiB/s): min=54856, max=74368, per=100.00%, avg=64149.33, stdev=9788.86, samples=3 00:08:46.032 iops : min=13714, max=18592, avg=16037.33, stdev=2447.21, samples=3 00:08:46.032 write: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(123MiB/2001msec); 0 zone resets 00:08:46.032 slat (nsec): min=4246, max=90081, avg=6663.19, stdev=3568.28 00:08:46.032 clat (usec): min=648, max=11370, avg=4067.29, stdev=1372.21 00:08:46.032 lat (usec): min=655, max=11383, avg=4073.95, stdev=1373.50 00:08:46.032 clat percentiles (usec): 00:08:46.032 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2999], 00:08:46.032 | 30.00th=[ 3228], 40.00th=[ 3392], 50.00th=[ 3589], 60.00th=[ 3884], 00:08:46.032 | 70.00th=[ 4490], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6783], 
00:08:46.032 | 99.00th=[ 7898], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[10683], 00:08:46.032 | 99.99th=[11207] 00:08:46.032 bw ( KiB/s): min=54072, max=74112, per=100.00%, avg=63896.00, stdev=10025.75, samples=3 00:08:46.032 iops : min=13518, max=18528, avg=15974.00, stdev=2506.44, samples=3 00:08:46.032 lat (usec) : 750=0.01%, 1000=0.01% 00:08:46.032 lat (msec) : 2=0.38%, 4=63.21%, 10=36.26%, 20=0.13% 00:08:46.032 cpu : usr=98.55%, sys=0.20%, ctx=3, majf=0, minf=605 00:08:46.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:46.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.032 issued rwts: total=31465,31502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.032 00:08:46.032 Run status group 0 (all jobs): 00:08:46.032 READ: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:46.032 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:46.032 ----------------------------------------------------- 00:08:46.032 Suppressions used: 00:08:46.032 count bytes template 00:08:46.032 1 32 /usr/src/fio/parse.c 00:08:46.032 1 8 libtcmalloc_minimal.so 00:08:46.032 ----------------------------------------------------- 00:08:46.032 00:08:46.032 ************************************ 00:08:46.032 END TEST nvme_fio 00:08:46.032 ************************************ 00:08:46.032 17:38:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:46.032 17:38:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:46.032 00:08:46.032 real 0m27.116s 00:08:46.032 user 0m16.142s 00:08:46.032 sys 0m19.764s 00:08:46.032 17:38:35 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.032 17:38:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:46.032 ************************************ 00:08:46.032 END TEST nvme 00:08:46.032 ************************************ 00:08:46.032 00:08:46.032 real 1m37.920s 00:08:46.032 user 3m39.005s 00:08:46.032 sys 0m30.957s 00:08:46.032 17:38:35 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.032 17:38:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.032 17:38:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:46.032 17:38:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:46.032 17:38:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.032 17:38:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.032 17:38:35 -- common/autotest_common.sh@10 -- # set +x 00:08:46.032 ************************************ 00:08:46.032 START TEST nvme_scc 00:08:46.032 ************************************ 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:46.032 * Looking for test storage... 
00:08:46.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.032 17:38:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.032 --rc genhtml_branch_coverage=1 00:08:46.032 --rc genhtml_function_coverage=1 00:08:46.032 --rc genhtml_legend=1 00:08:46.032 --rc geninfo_all_blocks=1 00:08:46.032 --rc geninfo_unexecuted_blocks=1 00:08:46.032 00:08:46.032 ' 00:08:46.032 17:38:35 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:46.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.033 --rc genhtml_branch_coverage=1 00:08:46.033 --rc genhtml_function_coverage=1 00:08:46.033 --rc genhtml_legend=1 00:08:46.033 --rc geninfo_all_blocks=1 00:08:46.033 --rc geninfo_unexecuted_blocks=1 00:08:46.033 00:08:46.033 ' 00:08:46.033 17:38:35 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:46.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.033 --rc genhtml_branch_coverage=1 00:08:46.033 --rc genhtml_function_coverage=1 00:08:46.033 --rc genhtml_legend=1 00:08:46.033 --rc geninfo_all_blocks=1 00:08:46.033 --rc geninfo_unexecuted_blocks=1 00:08:46.033 00:08:46.033 ' 00:08:46.033 17:38:35 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:46.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.033 --rc genhtml_branch_coverage=1 00:08:46.033 --rc genhtml_function_coverage=1 00:08:46.033 --rc genhtml_legend=1 00:08:46.033 --rc geninfo_all_blocks=1 00:08:46.033 --rc geninfo_unexecuted_blocks=1 00:08:46.033 00:08:46.033 ' 00:08:46.033 17:38:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.033 17:38:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.033 17:38:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.033 17:38:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.033 17:38:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.033 17:38:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.033 17:38:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.033 17:38:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.033 17:38:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:46.033 17:38:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
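From here on the log is scan_nvme_ctrls in functions.sh walking /sys/class/nvme and flattening each controller's id-ctrl output into a bash associative array, one eval per register (nvme0[vid]=0x1b36, nvme0[mdts]=7, and so on). Reduced to its core, the parsing pattern looks like the sketch below; the real helper additionally quotes values so padded strings such as the serial '12341 ' survive verbatim:

declare -A nvme0
# nvme-cli's id-ctrl prints one "reg : value" pair per line; split on the first
# ':' and store non-empty values keyed by the trimmed register name.
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}
  [[ -n $reg && -n $val ]] || continue
  nvme0[$reg]=${val# }        # e.g. nvme0[vid]=0x1b36, nvme0[ver]=0x10400
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
printf 'vid=%s sn=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[sn]}" "${nvme0[mdts]}"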
00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:46.033 17:38:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:46.033 17:38:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.033 17:38:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:46.033 17:38:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:46.033 17:38:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:46.033 17:38:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.553 Waiting for block devices as requested 00:08:46.553 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.553 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.553 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.813 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.112 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:52.112 17:38:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:52.112 17:38:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.112 17:38:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:52.112 17:38:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.112 17:38:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.112 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
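One register in the dump above is worth a gloss: mdts=7 is not a byte count but a power of two in units of the controller's minimum memory page size (assumed 4 KiB here for this QEMU device; on real hardware it comes from CAP.MPSMIN), so the largest single transfer the driver may issue works out as:

mdts=7 mpsmin=4096                        # mdts from the trace; page size assumed
echo "$(( (1 << mdts) * mpsmin )) bytes"  # 2^7 * 4096 = 524288, i.e. 512 KiB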
00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:52.113 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:52.114 17:38:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.114 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.115 17:38:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:52.115 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.116 17:38:41 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:52.116 17:38:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.116 17:38:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:52.116 17:38:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.116 17:38:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:52.116 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:52.117 17:38:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.117 
17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
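The repeated functions.sh@21-23 triplets above are bash xtrace from the nvme_get helper: every `field : value` line printed by `nvme id-ctrl` is split by `read -r reg val` under `IFS=:`, and the pair is stored into a global associative array through `eval`. A minimal sketch of that loop, assuming nvme-cli's plain-text output; parse_id_ctrl and its whitespace trimming are illustrative stand-ins, not the exact SPDK helper:

    # Sketch only: mirrors the IFS=: / read / eval pattern in the trace.
    parse_id_ctrl() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                 # same shape as "local -gA 'nvme1=()'" above
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # keys come padded, e.g. "mdts      "
            val=${val# }                    # drop the space that follows ':'
            [[ -n $val ]] || continue       # skip blank/heading lines, as @22 does
            eval "${ref}[\$reg]=\$val"      # -> nvme1[mdts]=7, nvme1[ver]=0x10400, ...
        done < <(nvme id-ctrl "$dev")
    }
    # parse_id_ctrl nvme1 /dev/nvme1 && echo "${nvme1[sn]}"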
00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.117 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:52.118 17:38:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:52.118 17:38:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:52.118 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:52.119 17:38:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
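Everything lands in the array as a raw string, so decoding happens later. As a hedged example (the bit layout is standard NVMe, but mps_min is an assumed 4 KiB minimum page, i.e. CAP.MPSMIN = 0, which is what QEMU typically reports): ver=0x10400 encodes NVMe 1.4, and mdts=7 caps transfers at min-page-size * 2^mdts.

    ver=$(( ${nvme1[ver]} ))                                        # 0x10400
    printf 'NVMe %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) # NVMe 1.4
    mps_min=4096                                                    # assumed CAP.MPSMIN=0
    echo "max transfer: $(( mps_min << ${nvme1[mdts]} )) bytes"     # 524288 = 512 KiB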
00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:52.119 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.120 
17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.120 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:52.121 17:38:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.121 17:38:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:52.121 17:38:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.121 17:38:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.121 17:38:41 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:52.121 17:38:41 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.121 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:52.122 17:38:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
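The nvme2 walk in progress here repeats what nvme1 went through above: after the per-namespace id-ns pass, functions.sh@60-63 filed the controller into four global maps (ctrls, nvmes holding the name of the per-controller namespace array, bdfs holding the PCI address such as 0000:00:10.0, and ordered_ctrls), with pci_can_use having first checked the device against the PCI_BLOCKED/PCI_ALLOWED lists. A sketch of how a later stage might consume those maps; only the array names and the `local -n` nameref trick (functions.sh@53) are taken from the trace, the loop itself is illustrative:

    print_ctrl_summary() {
        local ctrl ns_ref
        for ctrl in "${ordered_ctrls[@]}"; do
            ns_ref=${nvmes[$ctrl]}            # e.g. nvme1_ns
            local -n _ns=$ns_ref              # nameref, as at functions.sh@53
            echo "$ctrl @ ${bdfs[$ctrl]}: ${#_ns[@]} namespace(s): ${_ns[*]}"
            unset -n _ns                      # so the next pass re-points cleanly
        done
    }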
00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:52.122 17:38:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:52.122 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:52.123 
17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.123 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.124 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
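The @21-@23 trace repeated above is one small idiom applied once per field: run nvme-cli, split each output line on the first ':', and eval the pair into a global associative array named after the device. A minimal re-creation of that idiom follows; only the @16-@23 steps visible in this log are reproduced, and the exact whitespace trimming plus the NVME_CMD variable name are assumptions, not SPDK's verbatim source.

    # nvme_get <array-name> <nvme-cli args...>: sketch of the idiom traced at
    # functions.sh@16-@23. Whitespace handling and NVME_CMD are assumptions.
    nvme_get() {
        local ref=$1 reg val                      # @17
        shift                                     # @18: rest is the nvme-cli arg list
        local -gA "$ref=()"                       # @20: e.g. declares global nvme2n1=()
        while IFS=: read -r reg val; do           # @21: split "reg : val" on first ':'
            [[ -n $val ]] || continue             # @22: keep only key/value lines
            reg=${reg//[[:space:]]/}              # trim padding around the field name
            val=${val# }                          # drop the space after the colon
            eval "${ref}[$reg]=\"$val\""          # @23: nvme2n1[nsze]="0x100000"
        done < <("$NVME_CMD" "$@")                # @16: /usr/local/src/nvme-cli/nvme ...
    }

    # NVME_CMD=/usr/local/src/nvme-cli/nvme       # path as it appears in this log
    # nvme_get nvme2n1 id-ns /dev/nvme2n1
    # echo "${nvme2n1[nsze]}"                     # -> 0x100000

Feeding the loop through process substitution rather than a pipe matters here: the -gA array assignments must land in the current shell, not in a pipeline subshell.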
[nvme/functions.sh@21-@23 trace, condensed] nvme2n1[] populated from id-ns output:
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
  rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
  npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
  nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
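The @53-@58 lines around the nvme2n1 block show how the script walks from the controller to its namespaces: glob the controller's sysfs directory for block devices, recurse into nvme_get for each one, and index the result by namespace id through a nameref. A standalone sketch of that walk follows; the $ctrl value and the declare -n spelling are assumptions (the trace runs this inside a function, hence its local -n).

    # Namespace enumeration as traced at functions.sh@53-@58 (standalone form).
    ctrl=/sys/class/nvme/nvme2                    # assumed; matches this log
    declare -n _ctrl_ns=${ctrl##*/}_ns            # @53: _ctrl_ns aliases nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: nvme2n1 nvme2n2 nvme2n3 ...
        [[ -e $ns ]] || continue                  # @55: the glob may match nothing
        ns_dev=${ns##*/}                          # @56: e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fills nvme2n1[], nvme2n2[], ...
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: _ctrl_ns[1]=nvme2n1, [2]=nvme2n2
    done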
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:08:52.125 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
[nvme/functions.sh@21-@23 trace, condensed] nvme2n2[] populated from id-ns output:
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
  rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
  npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
  nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
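Each namespace so far reports flbas=0x4 with lbaf4 = 'ms:0 lbads:12 rp:0 (in use)': the low nibble of FLBAS selects LBA format 4, and an lbads of 12 means 2^12 = 4096-byte logical blocks. A hypothetical helper, not part of nvme/functions.sh, that decodes this from the arrays nvme_get just filled:

    # Hypothetical decoder (illustration only): derive the in-use logical block
    # size from an id-ns array captured above.
    ns_block_size() {
        local -n _ns=$1                           # nameref onto nvme2n1 / nvme2n2 / ...
        local fmt=$(( ${_ns[flbas]} & 0xf ))      # low nibble of FLBAS = active format
        local lbads=${_ns[lbaf$fmt]#*lbads:}      # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 ..."
        lbads=${lbads%% *}                        # -> "12"
        echo $(( 1 << lbads ))                    # 2^12 = 4096
    }
    # ns_block_size nvme2n1                       # -> 4096 for this QEMU device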
17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:52.127 17:38:41 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.128 
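
Among the fields just captured, mssrl=128, mcl=128 and msrc=127 are the Simple Copy limits this whole test revolves around: maximum length of a single source range, maximum total copy length, and maximum number of source ranges (a 0's based value, so 127 means 128 ranges). A sketch of how a copy request can be validated against them -- the helper below is illustrative, not part of functions.sh:

#!/usr/bin/env bash
# Minimal sketch: check one Simple Copy request (one argument per
# source-range length, in blocks) against the limits captured above.
copy_fits() {
  local mssrl=$1 mcl=$2 msrc=$3; shift 3
  local total=0 nranges=0 len
  for len in "$@"; do
    (( len <= mssrl )) || return 1         # per-range length cap
    (( nranges += 1, total += len ))
  done
  (( total <= mcl && nranges <= msrc + 1 ))  # msrc is 0's based
}

copy_fits 128 128 127 64 && echo "the 64-LBA copy run later is allowed"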
17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:52.128 17:38:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.128 17:38:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:52.128 17:38:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.128 17:38:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:52.128 17:38:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:52.128 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
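
The mdts=7 value just stored for nvme3 bounds every single I/O: MDTS is a power-of-two multiple of the controller's minimum memory page size. A one-line worked example -- the 4096-byte CAP.MPSMIN used below is an assumption, since the CAP register is not part of this id-ctrl dump:

#!/usr/bin/env bash
# Minimal sketch: max transfer size from MDTS, assuming 4 KiB MPSMIN.
mdts=7
mps_min=4096
echo "max transfer: $(( (1 << mdts) * mps_min )) bytes"   # 524288 (512 KiB)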
00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
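
The ctratt=0x88010 captured above is the controller attributes bitmap, and it foreshadows the rest of this run: per the NVMe 2.x spec (bit positions are my reading of the spec, not stated in this log), bit 4 is Endurance Groups and bit 19 is Flexible Data Placement, which is what makes this controller usable by the nvme_fdp test started further down. A sketch of the decode:

#!/usr/bin/env bash
# Minimal sketch: decode selected CTRATT bits of the value traced above.
ctratt=0x88010
(( ctratt & 1 << 4  )) && echo "endurance groups supported"
(( ctratt & 1 << 19 )) && echo "flexible data placement (FDP) supported"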
00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 
17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:52.130 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
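
The ocfs=0x3 just stored is the Optional Copy Formats Supported bitmap, another copy-related capability: each set bit advertises one Copy Descriptor Format (bit 0 for format 0h, bit 1 for format 1h, per my reading of the NVMe 2.x spec). A sketch of the decode:

#!/usr/bin/env bash
# Minimal sketch: list the copy descriptor formats behind ocfs=0x3.
ocfs=0x3
for bit in 0 1 2 3; do
  (( ocfs & 1 << bit )) && echo "copy descriptor format ${bit}h supported"
done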
00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.131 17:38:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:52.131 
17:38:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:52.131 17:38:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:52.131 17:38:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:52.131 17:38:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.276 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.276 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:08:53.276 17:38:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:53.276 17:38:42 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:53.276 17:38:42 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.276 17:38:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:53.276 ************************************ 00:08:53.276 START TEST nvme_simple_copy 00:08:53.276 ************************************ 00:08:53.276 17:38:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:53.536 Initializing NVMe Controllers 00:08:53.536 Attaching to 0000:00:10.0 00:08:53.536 Controller supports SCC. Attached to 0000:00:10.0 00:08:53.536 Namespace ID: 1 size: 6GB 00:08:53.536 Initialization complete. 00:08:53.536 00:08:53.536 Controller QEMU NVMe Ctrl (12340 ) 00:08:53.536 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:53.536 Namespace Block Size:4096 00:08:53.536 Writing LBAs 0 to 63 with Random Data 00:08:53.536 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:53.536 LBAs matching Written Data: 64 00:08:53.536 00:08:53.536 real 0m0.248s 00:08:53.536 user 0m0.086s 00:08:53.536 sys 0m0.061s 00:08:53.536 ************************************ 00:08:53.536 END TEST nvme_simple_copy 00:08:53.536 ************************************ 00:08:53.536 17:38:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.536 17:38:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 ************************************ 00:08:53.536 END TEST nvme_scc 00:08:53.536 ************************************ 00:08:53.536 00:08:53.536 real 0m7.698s 00:08:53.536 user 0m1.094s 00:08:53.536 sys 0m1.394s 00:08:53.536 17:38:43 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.536 17:38:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 17:38:43 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:53.536 17:38:43 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:53.536 17:38:43 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:53.536 17:38:43 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:53.536 17:38:43 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:53.536 17:38:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.536 17:38:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.536 17:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 ************************************ 00:08:53.536 START TEST nvme_fdp 00:08:53.536 ************************************ 00:08:53.536 17:38:43 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:08:53.798 * Looking for test storage... 
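
The controller selection that drove the simple-copy run above hinges on a single bit test: get_ctrls_with_feature asks ctrl_has_scc for each probed controller, and ctrl_has_scc checks ONCS bit 8 (Copy command support) in the cached id-ctrl data. All four QEMU controllers report oncs=0x15d, so all qualify, and the first in order -- nvme1 at 0000:00:10.0 -- ran the test. A standalone sketch of that check (the function name here is illustrative):

#!/usr/bin/env bash
# Minimal sketch of the capability test traced above: ONCS bit 8 in
# the Identify Controller data advertises the (Simple) Copy command.
has_scc() {
  local oncs=$1
  (( oncs & 1 << 8 ))     # 0x15d & 0x100 -> nonzero -> supported
}

has_scc 0x15d && echo "SCC supported"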
00:08:53.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.798 17:38:43 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.798 --rc genhtml_branch_coverage=1 00:08:53.798 --rc genhtml_function_coverage=1 00:08:53.798 --rc genhtml_legend=1 00:08:53.798 --rc geninfo_all_blocks=1 00:08:53.798 --rc geninfo_unexecuted_blocks=1 00:08:53.798 00:08:53.798 ' 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.798 --rc genhtml_branch_coverage=1 00:08:53.798 --rc genhtml_function_coverage=1 00:08:53.798 --rc genhtml_legend=1 00:08:53.798 --rc geninfo_all_blocks=1 00:08:53.798 --rc geninfo_unexecuted_blocks=1 00:08:53.798 00:08:53.798 ' 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.798 --rc genhtml_branch_coverage=1 00:08:53.798 --rc genhtml_function_coverage=1 00:08:53.798 --rc genhtml_legend=1 00:08:53.798 --rc geninfo_all_blocks=1 00:08:53.798 --rc geninfo_unexecuted_blocks=1 00:08:53.798 00:08:53.798 ' 00:08:53.798 17:38:43 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.798 --rc genhtml_branch_coverage=1 00:08:53.798 --rc genhtml_function_coverage=1 00:08:53.798 --rc genhtml_legend=1 00:08:53.798 --rc geninfo_all_blocks=1 00:08:53.798 --rc geninfo_unexecuted_blocks=1 00:08:53.798 00:08:53.798 ' 00:08:53.799 17:38:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.799 17:38:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.799 17:38:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.799 17:38:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.799 17:38:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.799 17:38:43 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.799 17:38:43 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.799 17:38:43 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.799 17:38:43 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:53.799 17:38:43 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
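The lt/cmp_versions trace above is how scripts/common.sh decides that the installed lcov (1.15 here) predates version 2, which selects the older "lcov_branch_coverage"-style flag spelling for LCOV_OPTS. A condensed sketch of that comparison, assuming purely numeric version components (the real decimal() helper additionally validates each field against ^[0-9]+$):

    #!/usr/bin/env bash
    # Condensed form of scripts/common.sh cmp_versions for the '<' case:
    # split both versions on '.', '-' or ':' and compare field by field.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # first fields decide: 1 < 2

Missing fields default to 0, so "1.15" compared against "2" is decided by the first component alone, exactly as in the trace.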
00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:53.799 17:38:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:53.799 17:38:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.799 17:38:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.320 Waiting for block devices as requested 00:08:54.320 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.320 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.582 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.911 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:59.911 17:38:49 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:59.911 17:38:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.911 17:38:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:59.911 17:38:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.911 17:38:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
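The long register dump running through this section is generated by scan_nvme_ctrls: for every /sys/class/nvme/nvme* device whose PCI address passes pci_can_use, nvme_get runs nvme-cli's id-ctrl (and later id-ns for each namespace), splits every "reg : val" output line on ':', and stores it in a per-controller associative array; the ctrls, nvmes and bdfs maps then tie each controller name to its array and PCI address. A condensed sketch of the parsing loop, using direct assignment and minimal whitespace trimming instead of the eval-based assignment functions.sh performs:

    #!/usr/bin/env bash
    # Condensed form of the nvme_get pattern traced here: parse 'nvme id-ctrl'
    # output into an associative array keyed by register name. Assumes an
    # nvme-cli binary on PATH (this run invokes /usr/local/src/nvme-cli/nvme).
    declare -A nvme0

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # keys are space-padded in nvme-cli output
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }            # e.g. nvme0[oncs]='0x15d'
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${nvme0[vid]} oncs=${nvme0[oncs]} subnqn=${nvme0[subnqn]}"

The eval seen in the trace exists so that one function can populate differently named arrays (nvme0, nvme0n1, nvme1, ...) via a name passed in as a parameter, which this fixed-name sketch sidesteps.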
00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:59.911 17:38:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:59.911 17:38:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:59.911 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:59.912 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:59.912 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.912 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:59.913 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.913 
17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.913 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:59.914 17:38:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:59.914 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.914 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:59.915 17:38:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.915 17:38:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:59.915 17:38:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.915 17:38:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
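
Here the fully parsed nvme0n1 namespace is attached to its controller and the walk moves on: functions.sh@58-63 record nvme0 in the ctrls/nvmes/bdfs/ordered_ctrls arrays (PCI 0000:00:11.0), then the `for ctrl in /sys/class/nvme/nvme*` loop picks up nvme1, gates it through pci_can_use against 0000:00:10.0, and starts a fresh id-ctrl parse. A rough sketch of that sysfs walk, assuming a standard Linux sysfs layout (the block-list variable below is hypothetical; the real gate is pci_can_use in scripts/common.sh):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # cf. functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        [[ ${PCI_BLOCKED:-} == *"$pci"* ]] && continue    # hypothetical list
        echo "will probe ${ctrl##*/} on $pci"
    done

00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # 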
IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.915 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 
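
Among the registers captured just above, nvme1[ver]=0x10400 encodes the controller's NVMe spec version: major in bits 31:16, minor in 15:8, tertiary in 7:0, so this controller reports NVMe 1.4.0. A one-liner to decode it, reusing the value from the trace:

    ver=0x10400    # nvme1[ver] as captured above
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
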
17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.916 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 
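
The nvme1[oacs]=0x12a value recorded a few entries back is a bitmask of optional admin command support. Going by the standard OACS bit assignments, 0x12a sets bits 1, 3, 5 and 8; bit 5 (Directives) is the one an FDP run cares about, since data placement is steered through directives. A hedged decode of that captured value:

    oacs=0x12a                                   # nvme1[oacs] from the trace
    (( oacs & (1 << 1) )) && echo "Format NVM"
    (( oacs & (1 << 3) )) && echo "Namespace Management"
    (( oacs & (1 << 5) )) && echo "Directives"   # prerequisite for FDP I/O
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"
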
17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:59.917 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 
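
Likewise, nvme1[oncs]=0x15d just above is the optional NVM command set mask. Assuming the standard ONCS bit order, 0x15d sets bits 0, 2, 3, 4, 6 and 8 (Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp, Copy); the Copy bit lines up with the mssrl/mcl/msrc copy limits the namespace dumps report. A short decode:

    oncs=0x15d    # nvme1[oncs] from the trace
    names=(Compare "Write Uncorrectable" DSM "Write Zeroes" "Save/Select" Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "${names[i]}"
    done
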
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.917 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:59.918 17:38:49 nvme_fdp -- 
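
nvme1n1[flbas]=0x7, captured above, selects which of the nlbaf+1 LBA formats the namespace is formatted with: bits 3:0 give the in-use lbaf index (7 here, matching the lbaf7 descriptor flagged "(in use)" further down), and bit 4 says whether metadata is carried inline at the end of each LBA. Decoded from the traced value:

    flbas=0x7                         # nvme1n1[flbas] from the trace
    fmt=$(( flbas & 0xf ))            # -> 7, i.e. the lbaf7 descriptor
    ext=$(( (flbas >> 4) & 0x1 ))     # -> 0, metadata not inline
    echo "in-use LBA format: lbaf$fmt, extended LBA: $ext"
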
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.918 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:59.919 17:38:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.919 17:38:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:59.919 17:38:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.919 17:38:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:59.919 
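
With nvme1n1 fully parsed, functions.sh@58-63 register nvme1 (PCI 0000:00:10.0) the same way nvme0 was registered, and the loop moves on to nvme2 at 0000:00:12.0. The namespace numbers already on record are enough to size nvme1n1: nsze=0x17a17a blocks in the in-use format lbaf7, whose lbads:12 means 2^12 = 4096-byte blocks. Worked out from those traced values:

    nsze=0x17a17a    # nvme1n1[nsze] from the trace
    lbads=12         # from the in-use descriptor "ms:64 lbads:12"
    echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, about 5.9 GiB
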
17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.919 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:59.920 17:38:49 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.920 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp 
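
The wctemp=343 and cctemp=373 values captured just above are composite temperature thresholds, which id-ctrl reports in kelvins. Converting the traced values for reference:

    # NVMe temperature thresholds are kelvins; this QEMU controller reports:
    echo $(( 343 - 273 ))   # 70  (degrees C, warning threshold)
    echo $(( 373 - 273 ))   # 100 (degrees C, critical threshold)
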
-- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:59.921 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.922 17:38:49 nvme_fdp -- 
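
sqes=0x66 and cqes=0x44 above each pack two 4-bit log2 sizes: the low nibble is the required queue entry size, the high nibble the maximum the controller supports. Decoding the traced values:

    sqes=0x66 cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))          # 64-byte SQ entries (required)
    echo $(( 1 << ((sqes >> 4) & 0xf) ))   # 64-byte SQ entries (maximum)
    echo $(( 1 << (cqes & 0xf) ))          # 16-byte CQ entries
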
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
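
oncs=0x15d recorded here is the optional-NVM-command bitmask; the bits set (0, 2, 3, 4, 6, 8) cover Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp, and, assuming the NVMe 2.0 bit assignment, Copy. A quick bit test in the same shell dialect:

    oncs=0x15d
    (( oncs & (1 << 2) )) && echo "Dataset Management supported"
    (( oncs & (1 << 3) )) && echo "Write Zeroes supported"
    (( oncs & (1 << 8) )) && echo "Copy supported"   # assumed NVMe 2.0 bit
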
00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:59.922 17:38:49 nvme_fdp -- 
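
With the nvme2 controller dump complete, the trace switches to its namespaces: a bash nameref aliases the controller's namespace map, then each nvme2n* sysfs node gets its own id-ns parse through the same nvme_get. A sketch of that traced fragment (functions.sh@53-@58), wrapped in a hypothetical function for self-containment:

    # Hypothetical wrapper around the traced namespace walk.
    walk_namespaces() {
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev
        local -n _ctrl_ns=${ctrl_dev}_ns        # nameref onto e.g. nvme2_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                    # nvme2n1, nvme2n2, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev     # indexed by namespace number
        done
    }
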
nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:59.922 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 
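
mssrl=128, mcl=128, and msrc=127 just above bound the Copy command for this namespace: msrc is a 0-based count of source ranges (so 128 ranges), each range at most mssrl blocks, and the whole copy at most mcl blocks, assuming the standard 0-based reading of msrc. A sketch of validating a copy request against them:

    mssrl=128 mcl=128 msrc=127
    nranges=4 range_len=32      # hypothetical request: 4 ranges of 32 blocks
    (( nranges <= msrc + 1 && range_len <= mssrl && nranges * range_len <= mcl )) &&
        echo "copy request within controller limits"
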
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.923 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- 
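
nvme2n1's format table is now complete: flbas=0x4 selects lbaf4, which the dump marks "(in use)" with ms:0 lbads:12 rp:0, i.e. 4096-byte blocks and no metadata. Combining that with nsze gives the namespace capacity; the values below are taken from this trace:

    nsze=0x100000   # namespace size in logical blocks
    lbads=12        # lbaf4 in use: 2^12 = 4096-byte blocks
    printf '%d bytes (%d GiB)\n' $(( nsze * (1 << lbads) )) \
        $(( nsze * (1 << lbads) >> 30 ))   # 4294967296 bytes (4 GiB)
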
nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.924 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:59.925 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.925 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
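
The xtrace above is nvme/functions.sh's nvme_get helper at work: it runs the nvme-cli id-ns command shown at functions.sh@16, splits every "reg : val" output line with IFS=: read, and evals each pair into a global associative array (here nvme2n3). A minimal standalone sketch of that idiom, assuming the same "reg : val" layout; this is a simplification for illustration, not the verbatim SPDK helper:

    # parse_id NAME CMD...: read "reg : val" lines emitted by CMD into a
    # global associative array NAME (simplified nvme_get-style parser).
    parse_id() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # "nsze   " -> "nsze"
            val=${val#"${val%%[![:space:]]*}"} # left-trim the value
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <("$@")
    }
    # usage (device node as in the trace):
    #   parse_id nvme2n3 nvme id-ns /dev/nvme2n3
    #   echo "${nvme2n3[nsze]}"   # -> 0x100000 on this rig

Because read leaves the remainder of the line in the last variable, multi-colon values such as the lbaf entries ("ms:0 lbads:9 rp:0 ") survive intact, which is why they appear verbatim in the arrays traced here.
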
00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:59.926 
17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:59.926 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:59.927 17:38:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.927 17:38:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:59.927 17:38:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.927 17:38:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:59.927 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.927 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 
17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:59.928 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 
17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
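
Every id-ctrl register for nvme3 up through subnqn has now been captured. The values stored in the nvme3 array are simply the fields printed by the nvme-cli binary invoked at functions.sh@16, so they can be spot-checked by hand. An illustrative check (device node as in the trace; the field list is an arbitrary sample):

    # Spot-check a few of the registers parsed above directly from nvme-cli:
    nvme id-ctrl /dev/nvme3 | grep -E '^(vid|sn|mn|ctratt|subnqn)[[:space:]]'
    # expected per this trace: vid 0x1b36, sn "12343", mn "QEMU NVMe Ctrl",
    # ctratt 0x88010, subnqn nqn.2019-08.org.qemu:fdp-subsys3
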
00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.929 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:59.930 17:38:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:59.930 17:38:49 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:59.930 17:38:49 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.765 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.765 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.765 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.765 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.025 17:38:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:01.025 17:38:50 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:01.025 17:38:50 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.025 17:38:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:01.025 ************************************ 00:09:01.025 START TEST nvme_flexible_data_placement 00:09:01.025 ************************************ 00:09:01.025 17:38:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:01.286 Initializing NVMe Controllers 00:09:01.286 Attaching to 0000:00:13.0 00:09:01.286 Controller supports FDP Attached to 0000:00:13.0 00:09:01.286 Namespace ID: 1 Endurance Group ID: 1 00:09:01.286 Initialization complete. 00:09:01.286 00:09:01.286 ================================== 00:09:01.286 == FDP tests for Namespace: #01 == 00:09:01.286 ================================== 00:09:01.286 00:09:01.286 Get Feature: FDP: 00:09:01.286 ================= 00:09:01.286 Enabled: Yes 00:09:01.286 FDP configuration Index: 0 00:09:01.286 00:09:01.286 FDP configurations log page 00:09:01.286 =========================== 00:09:01.286 Number of FDP configurations: 1 00:09:01.286 Version: 0 00:09:01.286 Size: 112 00:09:01.286 FDP Configuration Descriptor: 0 00:09:01.286 Descriptor Size: 96 00:09:01.286 Reclaim Group Identifier format: 2 00:09:01.286 FDP Volatile Write Cache: Not Present 00:09:01.286 FDP Configuration: Valid 00:09:01.286 Vendor Specific Size: 0 00:09:01.286 Number of Reclaim Groups: 2 00:09:01.286 Number of Reclaim Unit Handles: 8 00:09:01.286 Max Placement Identifiers: 128 00:09:01.286 Number of Namespaces Supported: 256 00:09:01.286 Reclaim Unit Nominal Size: 6000000 bytes 00:09:01.286 Estimated Reclaim Unit Time Limit: Not Reported 00:09:01.286 RUH Desc #000: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #001: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #002: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #003: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #004: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #005: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #006: RUH Type: Initially Isolated 00:09:01.286 RUH Desc #007: RUH Type: Initially Isolated 00:09:01.286 00:09:01.286 FDP reclaim unit handle usage log page 00:09:01.286 ====================================== 00:09:01.286 Number of Reclaim Unit Handles: 8 00:09:01.286 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:01.286 RUH Usage Desc #001: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #002: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #003: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #004: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #005: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #006: RUH Attributes: Unused 00:09:01.286 RUH Usage Desc #007: RUH Attributes: Unused 00:09:01.286 00:09:01.286 FDP statistics log page 00:09:01.286 ======================= 00:09:01.286 Host bytes with metadata written: 1110478848 00:09:01.286 Media bytes with metadata written: 1110724608 00:09:01.286 Media bytes erased: 0 00:09:01.286 00:09:01.286 FDP Reclaim unit handle status 00:09:01.286 ============================== 00:09:01.286 Number of RUHS descriptors: 2 00:09:01.286 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005cf7 00:09:01.286 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:01.286 00:09:01.286 FDP write on placement id: 0 success 00:09:01.286 00:09:01.286 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:01.286 00:09:01.286 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:01.286 00:09:01.286 Get Feature: FDP Events for Placement handle: #0 00:09:01.286 ======================== 00:09:01.286 Number of FDP Events: 6 00:09:01.286 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:01.286 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:01.286 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:01.286 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:01.286 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:01.286 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:01.286 00:09:01.286 FDP events log page 00:09:01.286 =================== 00:09:01.286 Number of FDP events: 1 00:09:01.286 FDP Event #0: 00:09:01.286 Event Type: RU Not Written to Capacity 00:09:01.286 Placement Identifier: Valid 00:09:01.286 NSID: Valid 00:09:01.286 Location: Valid 00:09:01.286 Placement Identifier: 0 00:09:01.286 Event Timestamp: 6 00:09:01.286 Namespace Identifier: 1 00:09:01.286 Reclaim Group Identifier: 0 00:09:01.286 Reclaim Unit Handle Identifier: 0 00:09:01.286 00:09:01.286 FDP test passed 00:09:01.286 00:09:01.286 real 0m0.220s 00:09:01.286 user 0m0.061s 00:09:01.286 sys 0m0.059s 00:09:01.286 ************************************ 00:09:01.286 END TEST nvme_flexible_data_placement 00:09:01.286 ************************************ 00:09:01.286 17:38:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.286 17:38:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:01.286 ************************************ 00:09:01.286 END TEST nvme_fdp 00:09:01.286 ************************************ 00:09:01.286 00:09:01.286 real 0m7.576s 00:09:01.286 user 0m1.000s 00:09:01.286 sys 0m1.413s 00:09:01.286 17:38:50 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.286 17:38:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:01.286 17:38:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:01.286 17:38:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:01.286 17:38:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.286 17:38:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.286 17:38:50 -- common/autotest_common.sh@10 -- # set +x 00:09:01.286 ************************************ 00:09:01.287 START TEST nvme_rpc 00:09:01.287 ************************************ 00:09:01.287 17:38:50 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:01.287 * Looking for test storage...
00:09:01.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.287 17:38:51 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.287 --rc genhtml_branch_coverage=1 00:09:01.287 --rc genhtml_function_coverage=1 00:09:01.287 --rc genhtml_legend=1 00:09:01.287 --rc geninfo_all_blocks=1 00:09:01.287 --rc geninfo_unexecuted_blocks=1 00:09:01.287 00:09:01.287 ' 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.287 --rc genhtml_branch_coverage=1 00:09:01.287 --rc genhtml_function_coverage=1 00:09:01.287 --rc genhtml_legend=1 00:09:01.287 --rc geninfo_all_blocks=1 00:09:01.287 --rc geninfo_unexecuted_blocks=1 00:09:01.287 00:09:01.287 ' 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:01.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.287 --rc genhtml_branch_coverage=1 00:09:01.287 --rc genhtml_function_coverage=1 00:09:01.287 --rc genhtml_legend=1 00:09:01.287 --rc geninfo_all_blocks=1 00:09:01.287 --rc geninfo_unexecuted_blocks=1 00:09:01.287 00:09:01.287 ' 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.287 --rc genhtml_branch_coverage=1 00:09:01.287 --rc genhtml_function_coverage=1 00:09:01.287 --rc genhtml_legend=1 00:09:01.287 --rc geninfo_all_blocks=1 00:09:01.287 --rc geninfo_unexecuted_blocks=1 00:09:01.287 00:09:01.287 ' 00:09:01.287 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.287 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.287 17:38:51 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:01.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.547 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:01.547 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66113 00:09:01.547 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:01.547 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:01.547 17:38:51 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66113 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66113 ']' 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.547 17:38:51 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.548 17:38:51 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.548 17:38:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.548 [2024-10-13 17:38:51.224647] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:09:01.548 [2024-10-13 17:38:51.224906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66113 ] 00:09:01.807 [2024-10-13 17:38:51.375498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.807 [2024-10-13 17:38:51.487695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.807 [2024-10-13 17:38:51.487790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.378 17:38:52 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.378 17:38:52 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:02.378 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:02.638 Nvme0n1 00:09:02.638 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:02.638 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:02.898 request: 00:09:02.898 { 00:09:02.898 "bdev_name": "Nvme0n1", 00:09:02.898 "filename": "non_existing_file", 00:09:02.898 "method": "bdev_nvme_apply_firmware", 00:09:02.898 "req_id": 1 00:09:02.898 } 00:09:02.898 Got JSON-RPC error response 00:09:02.898 response: 00:09:02.898 { 00:09:02.898 "code": -32603, 00:09:02.898 "message": "open file failed." 00:09:02.898 } 00:09:02.898 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:02.898 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:02.898 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:03.159 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:03.159 17:38:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66113 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66113 ']' 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66113 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66113 00:09:03.159 killing process with pid 66113 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66113' 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66113 00:09:03.159 17:38:52 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66113 00:09:04.538 ************************************ 00:09:04.538 END TEST nvme_rpc 00:09:04.538 ************************************ 00:09:04.538 00:09:04.538 real 0m3.384s 00:09:04.538 user 0m6.353s 00:09:04.538 sys 0m0.551s 00:09:04.538 17:38:54 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.538 17:38:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.538 17:38:54 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:04.538 17:38:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:04.538 17:38:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.538 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:09:04.799 ************************************ 00:09:04.799 START TEST nvme_rpc_timeouts 00:09:04.799 ************************************ 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:04.799 * Looking for test storage... 00:09:04.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.799 17:38:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.799 --rc genhtml_branch_coverage=1 00:09:04.799 --rc genhtml_function_coverage=1 00:09:04.799 --rc genhtml_legend=1 00:09:04.799 --rc geninfo_all_blocks=1 00:09:04.799 --rc geninfo_unexecuted_blocks=1 00:09:04.799 00:09:04.799 ' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.799 --rc genhtml_branch_coverage=1 00:09:04.799 --rc genhtml_function_coverage=1 00:09:04.799 --rc genhtml_legend=1 00:09:04.799 --rc geninfo_all_blocks=1 00:09:04.799 --rc geninfo_unexecuted_blocks=1 00:09:04.799 00:09:04.799 ' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.799 --rc genhtml_branch_coverage=1 00:09:04.799 --rc genhtml_function_coverage=1 00:09:04.799 --rc genhtml_legend=1 00:09:04.799 --rc geninfo_all_blocks=1 00:09:04.799 --rc geninfo_unexecuted_blocks=1 00:09:04.799 00:09:04.799 ' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.799 --rc genhtml_branch_coverage=1 00:09:04.799 --rc genhtml_function_coverage=1 00:09:04.799 --rc genhtml_legend=1 00:09:04.799 --rc geninfo_all_blocks=1 00:09:04.799 --rc geninfo_unexecuted_blocks=1 00:09:04.799 00:09:04.799 ' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66179 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66179 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66211 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66211 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66211 ']' 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.799 17:38:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.799 17:38:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:04.799 [2024-10-13 17:38:54.585191] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:04.799 [2024-10-13 17:38:54.585473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66211 ] 00:09:05.058 [2024-10-13 17:38:54.735848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.058 [2024-10-13 17:38:54.847281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.058 [2024-10-13 17:38:54.847343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.991 Checking default timeout settings: 00:09:05.991 17:38:55 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.991 17:38:55 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:05.991 17:38:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:05.991 17:38:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:05.991 Making settings changes with rpc: 00:09:05.991 17:38:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:05.991 17:38:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:06.248 Check default vs. modified settings: 00:09:06.248 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:06.248 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 Setting action_on_timeout is changed as expected. 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 Setting timeout_us is changed as expected. 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:06.814 Setting timeout_admin_us is changed as expected. 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66179 /tmp/settings_modified_66179 00:09:06.814 17:38:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66211 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66211 ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66211 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66211 00:09:06.814 killing process with pid 66211 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66211' 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66211 00:09:06.814 17:38:56 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66211 00:09:08.216 RPC TIMEOUT SETTING TEST PASSED. 00:09:08.216 17:38:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
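For reference, the default-vs-modified check traced above condenses to the sketch below. It is reconstructed from the xtrace rather than copied from nvme_rpc_timeouts.sh; the /tmp paths carry this run's 66179 suffix, and the failure branch (bailing out when a setting did not change) is an assumption about the untraced else path.

settings_to_check='action_on_timeout timeout_us timeout_admin_us'
for setting in $settings_to_check; do
    # take field 2 of the matching save_config line, stripped to alphanumerics
    setting_before=$(grep "$setting" /tmp/settings_default_66179 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    setting_modified=$(grep "$setting" /tmp/settings_modified_66179 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$setting_before" == "$setting_modified" ]; then
        exit 1  # assumed failure path: bdev_nvme_set_options should have changed every checked setting
    fi
    echo "Setting $setting is changed as expected."
done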
00:09:08.216 ************************************ 00:09:08.216 END TEST nvme_rpc_timeouts 00:09:08.216 ************************************ 00:09:08.216 00:09:08.216 real 0m3.418s 00:09:08.216 user 0m6.597s 00:09:08.216 sys 0m0.546s 00:09:08.216 17:38:57 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.216 17:38:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:08.216 17:38:57 -- spdk/autotest.sh@239 -- # uname -s 00:09:08.216 17:38:57 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:08.216 17:38:57 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:08.216 17:38:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.216 17:38:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.216 17:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:08.216 ************************************ 00:09:08.216 START TEST sw_hotplug 00:09:08.216 ************************************ 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:08.216 * Looking for test storage... 00:09:08.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.216 17:38:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.216 --rc genhtml_branch_coverage=1 00:09:08.216 --rc genhtml_function_coverage=1 00:09:08.216 --rc genhtml_legend=1 00:09:08.216 --rc geninfo_all_blocks=1 00:09:08.216 --rc geninfo_unexecuted_blocks=1 00:09:08.216 00:09:08.216 ' 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.216 --rc genhtml_branch_coverage=1 00:09:08.216 --rc genhtml_function_coverage=1 00:09:08.216 --rc genhtml_legend=1 00:09:08.216 --rc geninfo_all_blocks=1 00:09:08.216 --rc geninfo_unexecuted_blocks=1 00:09:08.216 00:09:08.216 ' 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.216 --rc genhtml_branch_coverage=1 00:09:08.216 --rc genhtml_function_coverage=1 00:09:08.216 --rc genhtml_legend=1 00:09:08.216 --rc geninfo_all_blocks=1 00:09:08.216 --rc geninfo_unexecuted_blocks=1 00:09:08.216 00:09:08.216 ' 00:09:08.216 17:38:57 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.217 --rc genhtml_branch_coverage=1 00:09:08.217 --rc genhtml_function_coverage=1 00:09:08.217 --rc genhtml_legend=1 00:09:08.217 --rc geninfo_all_blocks=1 00:09:08.217 --rc geninfo_unexecuted_blocks=1 00:09:08.217 00:09:08.217 ' 00:09:08.217 17:38:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:08.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.746 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:08.746 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:08.746 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:08.746 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:08.746 17:38:58 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:08.746 17:38:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:08.746 17:38:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:09.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.269 Waiting for block devices as requested 00:09:09.269 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.269 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.530 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.530 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.811 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:14.811 17:39:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:14.811 17:39:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:15.072 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:15.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.072 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:15.333 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:15.594 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.594 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.594 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67069 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:15.595 17:39:05 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:15.595 17:39:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:15.856 Initializing NVMe Controllers 00:09:15.856 Attaching to 0000:00:10.0 00:09:15.856 Attaching to 0000:00:11.0 00:09:15.856 Attached to 0000:00:11.0 00:09:15.856 Attached to 0000:00:10.0 00:09:15.856 Initialization complete. Starting I/O... 
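As an aside, the controller discovery traced earlier (nvme_in_userspace in scripts/common.sh) is a plain lspci filter for class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVM Express). Condensed from the xtrace, with the printf %02x steps folded into the comment:

# class=$(printf %02x 1), subclass=$(printf %02x 8), progif=$(printf %02x 2)  ->  01 08 02
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
# prints one BDF per NVMe function, here 0000:00:10.0 through 0000:00:13.0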
00:09:15.856 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:15.856 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:15.856 00:09:16.798 QEMU NVMe Ctrl (12341 ): 2700 I/Os completed (+2700) 00:09:16.798 QEMU NVMe Ctrl (12340 ): 2700 I/Os completed (+2700) 00:09:16.798 00:09:17.737 QEMU NVMe Ctrl (12341 ): 5986 I/Os completed (+3286) 00:09:17.737 QEMU NVMe Ctrl (12340 ): 6011 I/Os completed (+3311) 00:09:17.737 00:09:19.116 QEMU NVMe Ctrl (12341 ): 9827 I/Os completed (+3841) 00:09:19.116 QEMU NVMe Ctrl (12340 ): 9841 I/Os completed (+3830) 00:09:19.116 00:09:20.050 QEMU NVMe Ctrl (12341 ): 13603 I/Os completed (+3776) 00:09:20.050 QEMU NVMe Ctrl (12340 ): 13632 I/Os completed (+3791) 00:09:20.050 00:09:20.999 QEMU NVMe Ctrl (12341 ): 17330 I/Os completed (+3727) 00:09:20.999 QEMU NVMe Ctrl (12340 ): 17371 I/Os completed (+3739) 00:09:20.999 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:21.593 [2024-10-13 17:39:11.356866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:21.593 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:21.593 [2024-10-13 17:39:11.357935] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.357979] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.357995] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.358010] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:21.593 [2024-10-13 17:39:11.359386] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.359425] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.359436] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.359449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:21.593 [2024-10-13 17:39:11.381583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:21.593 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:21.593 [2024-10-13 17:39:11.382442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.382474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.382492] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.382505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:21.593 [2024-10-13 17:39:11.383801] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.383831] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.383843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 [2024-10-13 17:39:11.383854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:21.593 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:21.593 EAL: Scan for (pci) bus failed. 00:09:21.593 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:21.852 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:21.852 Attaching to 0000:00:10.0 00:09:21.852 Attached to 0000:00:10.0 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:21.852 17:39:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:21.852 Attaching to 0000:00:11.0 00:09:21.852 Attached to 0000:00:11.0 00:09:22.786 QEMU NVMe Ctrl (12340 ): 3855 I/Os completed (+3855) 00:09:22.786 QEMU NVMe Ctrl (12341 ): 3495 I/Os completed (+3495) 00:09:22.786 00:09:24.160 QEMU NVMe Ctrl (12340 ): 7622 I/Os completed (+3767) 00:09:24.160 QEMU NVMe Ctrl (12341 ): 7327 I/Os completed (+3832) 00:09:24.160 00:09:25.094 QEMU NVMe Ctrl (12340 ): 11396 I/Os completed (+3774) 00:09:25.094 QEMU NVMe Ctrl (12341 ): 11109 I/Os completed (+3782) 00:09:25.094 00:09:26.029 QEMU NVMe Ctrl (12340 ): 15126 I/Os completed (+3730) 00:09:26.029 QEMU NVMe Ctrl (12341 ): 14930 I/Os completed (+3821) 00:09:26.029 00:09:26.964 QEMU NVMe Ctrl (12340 ): 18931 I/Os completed (+3805) 00:09:26.964 QEMU NVMe Ctrl (12341 ): 18778 I/Os completed (+3848) 00:09:26.964 00:09:27.899 QEMU NVMe Ctrl (12340 ): 22671 I/Os completed (+3740) 00:09:27.899 QEMU NVMe Ctrl (12341 ): 22506 I/Os completed (+3728) 00:09:27.899 00:09:28.834 QEMU NVMe Ctrl (12340 ): 26428 I/Os completed (+3757) 00:09:28.834 QEMU NVMe Ctrl (12341 ): 26292 I/Os completed (+3786) 
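The bare echo 1, echo uio_pci_generic, and echo <bdf> commands in the trace above are sysfs writes; xtrace does not print redirections, so the target nodes in this sketch are an assumption about what sw_hotplug.sh lines 40 and 56-62 point at (the trace echoes each BDF twice, so the exact pair of rebind nodes may differ):

for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove; SPDK then logs the failed-state errors seen above
done
echo 1 > /sys/bus/pci/rescan                      # bring the functions back onto the bus
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe      # rebind so the hotplug app can re-attach
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done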
00:09:28.834 00:09:29.767 QEMU NVMe Ctrl (12340 ): 30260 I/Os completed (+3832) 00:09:29.767 QEMU NVMe Ctrl (12341 ): 30126 I/Os completed (+3834) 00:09:29.767 00:09:31.144 QEMU NVMe Ctrl (12340 ): 33790 I/Os completed (+3530) 00:09:31.144 QEMU NVMe Ctrl (12341 ): 33731 I/Os completed (+3605) 00:09:31.144 00:09:32.120 QEMU NVMe Ctrl (12340 ): 37303 I/Os completed (+3513) 00:09:32.120 QEMU NVMe Ctrl (12341 ): 37252 I/Os completed (+3521) 00:09:32.120 00:09:33.064 QEMU NVMe Ctrl (12340 ): 41050 I/Os completed (+3747) 00:09:33.064 QEMU NVMe Ctrl (12341 ): 40986 I/Os completed (+3734) 00:09:33.064 00:09:33.998 QEMU NVMe Ctrl (12340 ): 44788 I/Os completed (+3738) 00:09:33.998 QEMU NVMe Ctrl (12341 ): 44742 I/Os completed (+3756) 00:09:33.998 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:33.998 [2024-10-13 17:39:23.633475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:33.998 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:33.998 [2024-10-13 17:39:23.634392] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.634435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.634450] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.634466] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:33.998 [2024-10-13 17:39:23.636062] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.636098] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.636109] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.636121] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:33.998 [2024-10-13 17:39:23.655829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:33.998 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:33.998 [2024-10-13 17:39:23.656668] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.656701] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.656720] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.656733] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:33.998 [2024-10-13 17:39:23.658048] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.658081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.658094] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 [2024-10-13 17:39:23.658106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:33.998 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:33.998 EAL: Scan for (pci) bus failed. 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:33.998 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:33.999 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:33.999 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:34.257 Attaching to 0000:00:10.0 00:09:34.257 Attached to 0000:00:10.0 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.257 17:39:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:34.257 Attaching to 0000:00:11.0 00:09:34.257 Attached to 0000:00:11.0 00:09:34.825 QEMU NVMe Ctrl (12340 ): 2766 I/Os completed (+2766) 00:09:34.826 QEMU NVMe Ctrl (12341 ): 2470 I/Os completed (+2470) 00:09:34.826 00:09:35.767 QEMU NVMe Ctrl (12340 ): 6139 I/Os completed (+3373) 00:09:35.767 QEMU NVMe Ctrl (12341 ): 5862 I/Os completed (+3392) 00:09:35.767 00:09:37.143 QEMU NVMe Ctrl (12340 ): 9351 I/Os completed (+3212) 00:09:37.143 QEMU NVMe Ctrl (12341 ): 9082 I/Os completed (+3220) 00:09:37.143 00:09:38.076 QEMU NVMe Ctrl (12340 ): 13244 I/Os completed (+3893) 00:09:38.076 QEMU NVMe Ctrl (12341 ): 12998 I/Os completed (+3916) 00:09:38.076 00:09:39.011 QEMU NVMe Ctrl (12340 ): 16877 I/Os completed (+3633) 00:09:39.011 QEMU NVMe Ctrl (12341 ): 16627 I/Os completed (+3629) 00:09:39.011 00:09:39.951 QEMU NVMe Ctrl (12340 ): 19613 I/Os completed (+2736) 00:09:39.951 QEMU NVMe Ctrl (12341 ): 19379 I/Os completed (+2752) 00:09:39.951 00:09:40.888 QEMU NVMe Ctrl (12340 ): 22421 I/Os completed (+2808) 00:09:40.888 QEMU NVMe Ctrl (12341 ): 22185 I/Os completed (+2806) 00:09:40.888 
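After both controllers are gone, the echo 1 at line 56 matches a PCI bus rescan, and the line 58-62 loop rebinds each rediscovered function to uio_pci_generic before the 12-second settle at line 66, during which the test app logs "Attaching to"/"Attached to" and I/O resumes. The EAL "cannot open sysfs value .../vendor" warning is a benign race: DPDK scans the bus while a function is still detached. The redirection targets are again hidden by xtrace, and the trace echoes each BDF twice (lines 60-61), which the sketch below collapses into a single drivers_probe write, so treat the exact node names as assumptions:

    # Sketch of the reattach half (lines 56-66); sysfs targets are assumed.
    echo 1 > /sys/bus/pci/rescan                  # line 56: rediscover the removed functions
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # line 59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # lines 60-61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # line 62: clear override
    done
    sleep 12                                      # line 66: let the app re-attach and resume I/O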
00:09:41.823 QEMU NVMe Ctrl (12340 ): 26433 I/Os completed (+4012) 00:09:41.823 QEMU NVMe Ctrl (12341 ): 26197 I/Os completed (+4012) 00:09:41.823 00:09:42.757 QEMU NVMe Ctrl (12340 ): 30235 I/Os completed (+3802) 00:09:42.757 QEMU NVMe Ctrl (12341 ): 29991 I/Os completed (+3794) 00:09:42.757 00:09:44.133 QEMU NVMe Ctrl (12340 ): 34072 I/Os completed (+3837) 00:09:44.133 QEMU NVMe Ctrl (12341 ): 33827 I/Os completed (+3836) 00:09:44.133 00:09:45.067 QEMU NVMe Ctrl (12340 ): 37923 I/Os completed (+3851) 00:09:45.067 QEMU NVMe Ctrl (12341 ): 37660 I/Os completed (+3833) 00:09:45.067 00:09:46.002 QEMU NVMe Ctrl (12340 ): 41777 I/Os completed (+3854) 00:09:46.002 QEMU NVMe Ctrl (12341 ): 41500 I/Os completed (+3840) 00:09:46.002 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.260 [2024-10-13 17:39:35.889498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:46.260 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:46.260 [2024-10-13 17:39:35.890414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.890446] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.890459] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.890473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.260 [2024-10-13 17:39:35.891918] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.891951] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.891962] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.891973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.260 [2024-10-13 17:39:35.912457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:46.260 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:46.260 [2024-10-13 17:39:35.913314] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.913350] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.913366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.913378] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.260 [2024-10-13 17:39:35.914729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.914759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.914773] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 [2024-10-13 17:39:35.914782] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:46.260 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:46.260 EAL: Scan for (pci) bus failed. 00:09:46.260 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.261 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.261 17:39:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:46.261 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:46.521 Attaching to 0000:00:10.0 00:09:46.521 Attached to 0000:00:10.0 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.521 17:39:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:46.521 Attaching to 0000:00:11.0 00:09:46.521 Attached to 0000:00:11.0 00:09:46.521 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.521 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.521 [2024-10-13 17:39:36.165575] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:58.759 17:39:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:58.759 17:39:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:58.759 17:39:48 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.80 00:09:58.759 17:39:48 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.80 00:09:58.759 17:39:48 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:09:58.759 17:39:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.80 00:09:58.759 17:39:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.80 2 00:09:58.759 remove_attach_helper took 42.80s to complete (handling 2 nvme drive(s)) 17:39:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67069 00:10:05.346 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67069) - No such process 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67069 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67619 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67619 00:10:05.346 17:39:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67619 ']' 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.346 17:39:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 [2024-10-13 17:39:54.255456] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
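The 42.80 s figure comes from the timing wrapper visible in the trace: timing_cmd runs remove_attach_helper under time with TIMEFORMAT=%2R and echoes the elapsed seconds. The section then switches harnesses. The standalone hotplug app (pid 67069) has already exited, so kill -0 fails as expected, and tgt_run_hotplug starts spdk_tgt (pid 67619), waits for its RPC socket, and re-runs the same helper against the bdev layer. A sketch of that startup, using the function and command names from the trace with inferred bodies; $rootdir is a hypothetical stand-in for the repo path:

    # Sketch of the tgt_run_hotplug startup traced above (lines 107-115);
    # the trap string is verbatim from the trace, the rest is inferred.
    tgt_run_hotplug() {
        "$rootdir/build/bin/spdk_tgt" &
        spdk_tgt_pid=$!
        trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$spdk_tgt_pid"      # blocks until /var/tmp/spdk.sock answers RPCs
        rpc_cmd bdev_nvme_set_hotplug -e   # line 115: enable the bdev_nvme hotplug monitor
    }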
00:10:05.346 [2024-10-13 17:39:54.255619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67619 ] 00:10:05.346 [2024-10-13 17:39:54.406135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.346 [2024-10-13 17:39:54.524140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:05.608 17:39:55 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:05.608 17:39:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:12.186 17:40:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.186 17:40:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.186 17:40:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:12.186 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:12.186 [2024-10-13 17:40:01.323209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:12.186 [2024-10-13 17:40:01.324414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.186 [2024-10-13 17:40:01.324448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.186 [2024-10-13 17:40:01.324458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.186 [2024-10-13 17:40:01.324477] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.186 [2024-10-13 17:40:01.324485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.186 [2024-10-13 17:40:01.324493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.186 [2024-10-13 17:40:01.324500] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.186 [2024-10-13 17:40:01.324508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.186 [2024-10-13 17:40:01.324515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.186 [2024-10-13 17:40:01.324525] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.186 [2024-10-13 17:40:01.324532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.187 [2024-10-13 17:40:01.324540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.187 [2024-10-13 17:40:01.723210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
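The four "aborting outstanding command" lines per controller are the admin queue being drained on surprise removal: each controller had four Asynchronous Event Requests outstanding, and each completes as ABORTED - BY REQUEST (00/07), that is, status code type 00h (generic command status) with status code 07h (Command Abort Requested). The bdev_nvme "AER request execute failed" warnings that follow are the AER callbacks observing those aborted completions; all of this is the test passing, not failing.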
00:10:12.187 [2024-10-13 17:40:01.724359] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.187 [2024-10-13 17:40:01.724387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.187 [2024-10-13 17:40:01.724398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.187 [2024-10-13 17:40:01.724411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.187 [2024-10-13 17:40:01.724420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.187 [2024-10-13 17:40:01.724427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.187 [2024-10-13 17:40:01.724436] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.187 [2024-10-13 17:40:01.724442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.187 [2024-10-13 17:40:01.724450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.187 [2024-10-13 17:40:01.724457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.187 [2024-10-13 17:40:01.724465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.187 [2024-10-13 17:40:01.724471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.187 [2024-10-13 17:40:01.724483] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:10:12.187 [2024-10-13 17:40:01.724491] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:10:12.187 [2024-10-13 17:40:01.724498] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:10:12.187 [2024-10-13 17:40:01.724503] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:12.187 17:40:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.187 17:40:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.187 17:40:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.187 17:40:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.445 17:40:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.659 17:40:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.659 17:40:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.659 17:40:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.659 [2024-10-13 17:40:14.223435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
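In this phase "gone" is decided through the target rather than sysfs: bdev_bdfs, traced at sw_hotplug.sh lines 12-13, lists the PCI address of every NVMe-backed bdev, and line 50 polls it every 0.5 s until the list is empty, printing the "Still waiting ... to be gone" lines in between. The /dev/fd/63 in the trace is the process substitution feeding the RPC output to jq. A reconstruction assembled by inference from the traced commands:

    # bdev_bdfs as traced (lines 12-13): RPC bdev list -> NVMe PCI addresses.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Wait loop as traced (lines 50-51): poll until no bdev still maps to a BDF.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done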
00:10:24.659 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.659 [2024-10-13 17:40:14.225132] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.659 [2024-10-13 17:40:14.225162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.659 [2024-10-13 17:40:14.225175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.659 [2024-10-13 17:40:14.225196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.659 [2024-10-13 17:40:14.225206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.659 [2024-10-13 17:40:14.225217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.659 [2024-10-13 17:40:14.225226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.659 [2024-10-13 17:40:14.225236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.659 [2024-10-13 17:40:14.225244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.659 [2024-10-13 17:40:14.225255] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.660 [2024-10-13 17:40:14.225263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.660 [2024-10-13 17:40:14.225276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.660 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.660 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.660 17:40:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.660 17:40:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.660 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.660 17:40:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.660 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:24.660 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:24.921 [2024-10-13 17:40:14.723425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:24.921 [2024-10-13 17:40:14.724826] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.921 [2024-10-13 17:40:14.724864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.921 [2024-10-13 17:40:14.724878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.921 [2024-10-13 17:40:14.724898] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.921 [2024-10-13 17:40:14.724908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.921 [2024-10-13 17:40:14.724915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.921 [2024-10-13 17:40:14.724924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.921 [2024-10-13 17:40:14.724931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.921 [2024-10-13 17:40:14.724939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.921 [2024-10-13 17:40:14.724946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.921 [2024-10-13 17:40:14.724954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.921 [2024-10-13 17:40:14.724960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.182 17:40:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.182 17:40:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 17:40:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.182 17:40:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:25.443 17:40:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.443 17:40:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.443 17:40:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 [2024-10-13 17:40:27.123651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
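Each bdev-mode iteration ends with the check traced at lines 68-71: after rescan, rebind, and the 12 s settle, the bdev list must again contain exactly the two BDFs under test. The backslash-escaped right-hand side in the trace is just how xtrace renders a quoted [[ == ]] pattern; in the script it is presumably "${nvmes[*]}". The assertion, reduced to its effect:

    # Post-reattach check (lines 70-71): both controllers must be back as bdevs.
    bdfs=($(bdev_bdfs))                                  # line 70: re-read after reattach
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]    # line 71: assert both came back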
00:10:37.667 [2024-10-13 17:40:27.124883] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.667 [2024-10-13 17:40:27.124918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.667 [2024-10-13 17:40:27.124930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.667 [2024-10-13 17:40:27.124948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.667 [2024-10-13 17:40:27.124956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.667 [2024-10-13 17:40:27.124964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.667 [2024-10-13 17:40:27.124971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.667 [2024-10-13 17:40:27.124979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.667 [2024-10-13 17:40:27.124985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.667 [2024-10-13 17:40:27.124994] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.667 [2024-10-13 17:40:27.125000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.667 [2024-10-13 17:40:27.125008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.667 17:40:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:37.667 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.924 17:40:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.924 17:40:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.924 17:40:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:37.924 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.924 [2024-10-13 17:40:27.723658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:37.924 [2024-10-13 17:40:27.724814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.924 [2024-10-13 17:40:27.724840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.924 [2024-10-13 17:40:27.724851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.924 [2024-10-13 17:40:27.724863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.924 [2024-10-13 17:40:27.724874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.925 [2024-10-13 17:40:27.724881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.925 [2024-10-13 17:40:27.724889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.925 [2024-10-13 17:40:27.724896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.925 [2024-10-13 17:40:27.724904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.925 [2024-10-13 17:40:27.724911] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.925 [2024-10-13 17:40:27.724919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.925 [2024-10-13 17:40:27.724925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.491 17:40:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.491 17:40:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.491 17:40:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:38.491 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.751 17:40:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.29 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.29 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.29 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.29 2 00:10:50.952 remove_attach_helper took 45.29s to complete (handling 2 nvme drive(s)) 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:50.952 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:50.952 17:40:40 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:50.953 17:40:40 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:50.953 17:40:40 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:50.953 17:40:40 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:50.953 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:50.953 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:50.953 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:50.953 17:40:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:50.953 17:40:40 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.511 17:40:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.511 17:40:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.511 17:40:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:57.511 17:40:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.511 [2024-10-13 17:40:46.649826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:57.511 [2024-10-13 17:40:46.650715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:46.650742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:46.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 [2024-10-13 17:40:46.650769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:46.650776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:46.650785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 [2024-10-13 17:40:46.650793] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:46.650804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:46.650810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 [2024-10-13 17:40:46.650819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:46.650825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:46.650835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.511 17:40:47 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.511 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.511 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.511 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.511 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.511 17:40:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.511 17:40:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.511 [2024-10-13 17:40:47.149825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:57.511 [2024-10-13 17:40:47.150679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:47.150706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:47.150716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 [2024-10-13 17:40:47.150727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:47.150736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:47.150743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.511 [2024-10-13 17:40:47.150752] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.511 [2024-10-13 17:40:47.150759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.511 [2024-10-13 17:40:47.150767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.512 [2024-10-13 17:40:47.150774] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.512 [2024-10-13 17:40:47.150785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.512 [2024-10-13 17:40:47.150791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.512 17:40:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.512 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:57.512 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.078 17:40:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.078 17:40:47 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.078 17:40:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.078 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:58.336 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.336 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.336 17:40:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.549 17:40:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.549 17:40:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.549 17:40:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.549 17:41:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.549 17:41:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.549 17:41:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.549 [2024-10-13 17:41:00.050256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:10.549 [2024-10-13 17:41:00.051362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.549 [2024-10-13 17:41:00.051400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.549 [2024-10-13 17:41:00.051412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.549 [2024-10-13 17:41:00.051433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.549 [2024-10-13 17:41:00.051442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.549 [2024-10-13 17:41:00.051451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.549 [2024-10-13 17:41:00.051459] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.549 [2024-10-13 17:41:00.051468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.549 [2024-10-13 17:41:00.051475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.549 [2024-10-13 17:41:00.051485] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.549 [2024-10-13 17:41:00.051492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.549 [2024-10-13 17:41:00.051500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.549 17:41:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.549 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.814 [2024-10-13 17:41:00.450251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:10.814 [2024-10-13 17:41:00.451236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.814 [2024-10-13 17:41:00.451264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.814 [2024-10-13 17:41:00.451277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.814 [2024-10-13 17:41:00.451292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.814 [2024-10-13 17:41:00.451302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.814 [2024-10-13 17:41:00.451309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.814 [2024-10-13 17:41:00.451320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.814 [2024-10-13 17:41:00.451328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.814 [2024-10-13 17:41:00.451336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.814 [2024-10-13 17:41:00.451344] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.814 [2024-10-13 17:41:00.451352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.814 [2024-10-13 17:41:00.451358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.814 17:41:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.814 17:41:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.814 17:41:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.814 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.075 17:41:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.294 [2024-10-13 17:41:12.950469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
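The @56-62 echoes just traced re-attach each device. The xtrace records only the values written, never the redirection targets, so every path in this sketch is an assumption about the destinations based on how driver_override rebinding normally works, not something the log shows:

echo 1 > /sys/bus/pci/rescan                               # @56, assumed target
for dev in 0000:00:10.0 0000:00:11.0; do                   # @58
    echo uio_pci_generic \
        > "/sys/bus/pci/devices/$dev/driver_override"      # @59, assumed
    echo "$dev" > /sys/bus/pci/drivers_probe               # @60-61: two BDF writes whose
                                                           # targets the trace omits
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # @62, assumed: clear override
done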
00:11:23.294 [2024-10-13 17:41:12.951443] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.294 [2024-10-13 17:41:12.951486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.294 [2024-10-13 17:41:12.951498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.294 [2024-10-13 17:41:12.951522] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.294 [2024-10-13 17:41:12.951530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.294 [2024-10-13 17:41:12.951538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.294 [2024-10-13 17:41:12.951546] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.294 [2024-10-13 17:41:12.951555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.294 [2024-10-13 17:41:12.951572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.294 [2024-10-13 17:41:12.951582] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.294 [2024-10-13 17:41:12.951589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.294 [2024-10-13 17:41:12.951597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.294 17:41:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.294 17:41:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.558 [2024-10-13 17:41:13.350468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
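After the 12-second settle, each iteration asserts that both devices re-enumerated (@66-71, traced above and again below). The long backslash-escaped pattern in the @71 records is simply bash's xtrace printing a literal [[ ... == ... ]] match with every character escaped; it is equivalent to:

sleep 12                                               # @66: allow re-enumeration
bdfs=($(bdev_bdfs))                                    # @70
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]        # @71: fail the test on mismatch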
00:11:23.558 [2024-10-13 17:41:13.351411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.558 [2024-10-13 17:41:13.351443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.558 [2024-10-13 17:41:13.351454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.558 [2024-10-13 17:41:13.351467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.558 [2024-10-13 17:41:13.351476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.558 [2024-10-13 17:41:13.351483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.558 [2024-10-13 17:41:13.351495] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.558 [2024-10-13 17:41:13.351502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.558 [2024-10-13 17:41:13.351509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.558 [2024-10-13 17:41:13.351517] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.558 [2024-10-13 17:41:13.351526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.558 [2024-10-13 17:41:13.351532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.856 17:41:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.856 17:41:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.856 17:41:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.856 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.116 17:41:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.25 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.25 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.25 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.25 2 00:11:36.372 remove_attach_helper took 45.25s to complete (handling 2 nvme drive(s)) 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:36.372 17:41:25 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67619 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67619 ']' 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67619 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67619 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.372 killing process with pid 67619 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67619' 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67619 00:11:36.372 17:41:25 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67619 00:11:37.313 17:41:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:37.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:38.147 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.147 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.147 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.407 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.407 00:11:38.407 real 2m30.215s 00:11:38.407 user 1m51.938s 00:11:38.407 sys 0m16.930s 00:11:38.407 17:41:28 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.407 17:41:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.407 ************************************ 00:11:38.407 END TEST sw_hotplug 00:11:38.407 ************************************ 00:11:38.407 17:41:28 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:38.407 17:41:28 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:38.407 17:41:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.407 17:41:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.407 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:11:38.407 ************************************ 00:11:38.407 START TEST nvme_xnvme 00:11:38.407 ************************************ 00:11:38.407 17:41:28 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:38.407 * Looking for test storage... 00:11:38.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:38.407 17:41:28 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:38.407 17:41:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:11:38.407 17:41:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:38.669 17:41:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.669 17:41:28 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.670 --rc genhtml_branch_coverage=1 00:11:38.670 --rc genhtml_function_coverage=1 00:11:38.670 --rc genhtml_legend=1 00:11:38.670 --rc geninfo_all_blocks=1 00:11:38.670 --rc geninfo_unexecuted_blocks=1 00:11:38.670 00:11:38.670 ' 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.670 --rc genhtml_branch_coverage=1 00:11:38.670 --rc genhtml_function_coverage=1 00:11:38.670 --rc genhtml_legend=1 00:11:38.670 --rc geninfo_all_blocks=1 00:11:38.670 --rc geninfo_unexecuted_blocks=1 00:11:38.670 00:11:38.670 ' 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.670 --rc genhtml_branch_coverage=1 00:11:38.670 --rc genhtml_function_coverage=1 00:11:38.670 --rc genhtml_legend=1 00:11:38.670 --rc geninfo_all_blocks=1 00:11:38.670 --rc geninfo_unexecuted_blocks=1 00:11:38.670 00:11:38.670 ' 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.670 --rc genhtml_branch_coverage=1 00:11:38.670 --rc genhtml_function_coverage=1 00:11:38.670 --rc genhtml_legend=1 00:11:38.670 --rc geninfo_all_blocks=1 00:11:38.670 --rc geninfo_unexecuted_blocks=1 00:11:38.670 00:11:38.670 ' 00:11:38.670 17:41:28 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.670 17:41:28 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.670 17:41:28 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.670 17:41:28 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.670 17:41:28 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.670 17:41:28 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:38.670 17:41:28 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.670 17:41:28 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.670 17:41:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.670 ************************************ 00:11:38.670 START TEST xnvme_to_malloc_dd_copy 00:11:38.670 ************************************ 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:38.670 17:41:28 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:38.670 17:41:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.670 { 00:11:38.670 "subsystems": [ 00:11:38.670 { 00:11:38.670 "subsystem": "bdev", 00:11:38.670 "config": [ 00:11:38.670 { 00:11:38.670 "params": { 00:11:38.670 "block_size": 512, 00:11:38.670 "num_blocks": 2097152, 00:11:38.670 "name": "malloc0" 00:11:38.670 }, 00:11:38.670 "method": "bdev_malloc_create" 00:11:38.670 }, 00:11:38.670 { 00:11:38.670 "params": { 00:11:38.670 "io_mechanism": "libaio", 00:11:38.670 "filename": "/dev/nullb0", 00:11:38.670 "name": "null0" 00:11:38.670 }, 00:11:38.670 "method": "bdev_xnvme_create" 00:11:38.670 }, 00:11:38.670 { 00:11:38.670 "method": "bdev_wait_for_examine" 00:11:38.670 } 00:11:38.670 ] 00:11:38.670 } 00:11:38.670 ] 00:11:38.670 } 00:11:38.670 [2024-10-13 17:41:28.354063] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
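The JSON printed above is gen_conf's output for the first copy pass: a 1 GiB malloc0 bdev (512 B x 2097152 blocks) written into an xnvme null0 bdev backed by /dev/nullb0 via libaio. The traced xnvme.sh@42 step is roughly equivalent to running spdk_dd standalone against that config; a sketch using the paths from the trace:

modprobe null_blk gb=1        # init_null_blk, as traced at dd/common.sh@186
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)
# gen_conf emits the bdev config shown above; <(...) is what appears as /dev/fd/62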
00:11:38.670 [2024-10-13 17:41:28.354210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68999 ] 00:11:38.931 [2024-10-13 17:41:28.513025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.931 [2024-10-13 17:41:28.658030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.479  [2024-10-13T17:41:32.237Z] Copying: 221/1024 [MB] (221 MBps) [2024-10-13T17:41:33.180Z] Copying: 474/1024 [MB] (252 MBps) [2024-10-13T17:41:34.122Z] Copying: 771/1024 [MB] (297 MBps) [2024-10-13T17:41:36.036Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:11:46.222 00:11:46.222 17:41:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:46.222 17:41:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:46.222 17:41:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:46.222 17:41:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:46.222 { 00:11:46.222 "subsystems": [ 00:11:46.222 { 00:11:46.222 "subsystem": "bdev", 00:11:46.222 "config": [ 00:11:46.222 { 00:11:46.222 "params": { 00:11:46.222 "block_size": 512, 00:11:46.222 "num_blocks": 2097152, 00:11:46.222 "name": "malloc0" 00:11:46.222 }, 00:11:46.222 "method": "bdev_malloc_create" 00:11:46.222 }, 00:11:46.222 { 00:11:46.222 "params": { 00:11:46.222 "io_mechanism": "libaio", 00:11:46.222 "filename": "/dev/nullb0", 00:11:46.222 "name": "null0" 00:11:46.222 }, 00:11:46.222 "method": "bdev_xnvme_create" 00:11:46.222 }, 00:11:46.223 { 00:11:46.223 "method": "bdev_wait_for_examine" 00:11:46.223 } 00:11:46.223 ] 00:11:46.223 } 00:11:46.223 ] 00:11:46.223 } 00:11:46.223 [2024-10-13 17:41:35.851323] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
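The @47 pass traced above is the mirror of @42: identical config, direction reversed, so it exercises xnvme reads instead of writes:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)
# null0 -> malloc0: the Copying lines below report per-interval and average MBps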
00:11:46.223 [2024-10-13 17:41:35.851454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69085 ] 00:11:46.223 [2024-10-13 17:41:36.002531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.483 [2024-10-13 17:41:36.096327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.401  [2024-10-13T17:41:39.182Z] Copying: 299/1024 [MB] (299 MBps) [2024-10-13T17:41:40.123Z] Copying: 599/1024 [MB] (300 MBps) [2024-10-13T17:41:40.384Z] Copying: 900/1024 [MB] (300 MBps) [2024-10-13T17:41:42.928Z] Copying: 1024/1024 [MB] (average 300 MBps) 00:11:53.114 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:53.114 17:41:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:53.114 { 00:11:53.114 "subsystems": [ 00:11:53.114 { 00:11:53.114 "subsystem": "bdev", 00:11:53.114 "config": [ 00:11:53.114 { 00:11:53.114 "params": { 00:11:53.114 "block_size": 512, 00:11:53.114 "num_blocks": 2097152, 00:11:53.114 "name": "malloc0" 00:11:53.114 }, 00:11:53.114 "method": "bdev_malloc_create" 00:11:53.114 }, 00:11:53.114 { 00:11:53.114 "params": { 00:11:53.114 "io_mechanism": "io_uring", 00:11:53.114 "filename": "/dev/nullb0", 00:11:53.114 "name": "null0" 00:11:53.114 }, 00:11:53.114 "method": "bdev_xnvme_create" 00:11:53.114 }, 00:11:53.114 { 00:11:53.114 "method": "bdev_wait_for_examine" 00:11:53.114 } 00:11:53.114 ] 00:11:53.114 } 00:11:53.114 ] 00:11:53.114 } 00:11:53.114 [2024-10-13 17:41:42.490873] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
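For the second loop iteration the only change is the I/O mechanism, as the @38-39 trace above shows; the same two dd passes then repeat over io_uring. The traced assignment, for reference (method_bdev_xnvme_create_0 was declared earlier as an associative array at @34):

method_bdev_xnvme_create_0["io_mechanism"]=io_uring    # xnvme.sh@39, was libaio
# gen_conf now emits "io_mechanism": "io_uring" in the bdev_xnvme_create params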
00:11:53.114 [2024-10-13 17:41:42.491000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69172 ] 00:11:53.114 [2024-10-13 17:41:42.639856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.114 [2024-10-13 17:41:42.733335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.028  [2024-10-13T17:41:45.784Z] Copying: 306/1024 [MB] (306 MBps) [2024-10-13T17:41:46.727Z] Copying: 612/1024 [MB] (306 MBps) [2024-10-13T17:41:46.988Z] Copying: 918/1024 [MB] (306 MBps) [2024-10-13T17:41:48.900Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:11:59.086 00:11:59.086 17:41:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:59.086 17:41:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:59.086 17:41:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:59.086 17:41:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:59.347 { 00:11:59.347 "subsystems": [ 00:11:59.347 { 00:11:59.347 "subsystem": "bdev", 00:11:59.347 "config": [ 00:11:59.347 { 00:11:59.347 "params": { 00:11:59.347 "block_size": 512, 00:11:59.347 "num_blocks": 2097152, 00:11:59.347 "name": "malloc0" 00:11:59.347 }, 00:11:59.347 "method": "bdev_malloc_create" 00:11:59.347 }, 00:11:59.347 { 00:11:59.347 "params": { 00:11:59.347 "io_mechanism": "io_uring", 00:11:59.347 "filename": "/dev/nullb0", 00:11:59.347 "name": "null0" 00:11:59.347 }, 00:11:59.347 "method": "bdev_xnvme_create" 00:11:59.347 }, 00:11:59.347 { 00:11:59.347 "method": "bdev_wait_for_examine" 00:11:59.347 } 00:11:59.347 ] 00:11:59.347 } 00:11:59.347 ] 00:11:59.347 } 00:11:59.347 [2024-10-13 17:41:48.965699] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
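Once the fourth pass completes, the test tears the null device back down; the remove_null_blk call traced below reduces to a one-line helper (a sketch, assuming the function wraps nothing beyond the traced modprobe):

remove_null_blk() {
    modprobe -r null_blk       # dd/common.sh@191 as traced below
}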
00:11:59.347 [2024-10-13 17:41:48.965831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69248 ] 00:11:59.347 [2024-10-13 17:41:49.118452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.608 [2024-10-13 17:41:49.226763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.522  [2024-10-13T17:41:52.277Z] Copying: 310/1024 [MB] (310 MBps) [2024-10-13T17:41:53.219Z] Copying: 622/1024 [MB] (311 MBps) [2024-10-13T17:41:53.480Z] Copying: 933/1024 [MB] (311 MBps) [2024-10-13T17:41:55.395Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:05.581 00:12:05.841 17:41:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:05.841 17:41:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:05.841 00:12:05.841 real 0m27.192s 00:12:05.841 user 0m23.539s 00:12:05.841 sys 0m3.082s 00:12:05.841 17:41:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.841 ************************************ 00:12:05.841 END TEST xnvme_to_malloc_dd_copy 00:12:05.841 ************************************ 00:12:05.841 17:41:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:05.841 17:41:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:05.841 17:41:55 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:05.841 17:41:55 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.841 17:41:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.841 ************************************ 00:12:05.841 START TEST xnvme_bdevperf 00:12:05.841 ************************************ 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:05.841 
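The xnvme_bdevperf loop starting above drives the same null0 bdev through bdevperf rather than spdk_dd, once per io_mechanism. The @74 invocation traced just below, with <(gen_conf) standing in for the /dev/fd/62 process substitution:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
    -q 64 -w randread -t 5 -T null0 -o 4096
# 64-deep random 4 KiB reads against null0 for 5 seconds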
17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:05.841 17:41:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:05.841 { 00:12:05.841 "subsystems": [ 00:12:05.841 { 00:12:05.841 "subsystem": "bdev", 00:12:05.841 "config": [ 00:12:05.841 { 00:12:05.841 "params": { 00:12:05.841 "io_mechanism": "libaio", 00:12:05.841 "filename": "/dev/nullb0", 00:12:05.841 "name": "null0" 00:12:05.841 }, 00:12:05.841 "method": "bdev_xnvme_create" 00:12:05.841 }, 00:12:05.841 { 00:12:05.841 "method": "bdev_wait_for_examine" 00:12:05.841 } 00:12:05.841 ] 00:12:05.841 } 00:12:05.841 ] 00:12:05.841 } 00:12:05.841 [2024-10-13 17:41:55.603175] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:05.841 [2024-10-13 17:41:55.603293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69348 ] 00:12:06.102 [2024-10-13 17:41:55.751731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.102 [2024-10-13 17:41:55.842948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.363 Running I/O for 5 seconds... 00:12:08.322 201984.00 IOPS, 789.00 MiB/s [2024-10-13T17:41:59.090Z] 202496.00 IOPS, 791.00 MiB/s [2024-10-13T17:42:00.475Z] 202624.00 IOPS, 791.50 MiB/s [2024-10-13T17:42:01.418Z] 202432.00 IOPS, 790.75 MiB/s 00:12:11.604 Latency(us) 00:12:11.604 [2024-10-13T17:42:01.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.604 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:11.604 null0 : 5.00 202369.68 790.51 0.00 0.00 314.05 111.06 2722.26 00:12:11.604 [2024-10-13T17:42:01.418Z] =================================================================================================================== 00:12:11.604 [2024-10-13T17:42:01.418Z] Total : 202369.68 790.51 0.00 0.00 314.05 111.06 2722.26 00:12:11.865 17:42:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:11.865 17:42:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:11.865 17:42:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:11.866 17:42:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:11.866 17:42:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:11.866 17:42:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:12.126 { 00:12:12.126 "subsystems": [ 00:12:12.126 { 00:12:12.126 "subsystem": "bdev", 00:12:12.126 "config": [ 00:12:12.126 { 00:12:12.126 "params": { 00:12:12.126 "io_mechanism": "io_uring", 00:12:12.126 "filename": "/dev/nullb0", 00:12:12.126 "name": "null0" 00:12:12.126 }, 00:12:12.126 "method": "bdev_xnvme_create" 00:12:12.126 }, 00:12:12.126 { 00:12:12.126 "method": 
"bdev_wait_for_examine" 00:12:12.126 } 00:12:12.126 ] 00:12:12.126 } 00:12:12.126 ] 00:12:12.126 } 00:12:12.126 [2024-10-13 17:42:01.733846] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:12.126 [2024-10-13 17:42:01.733978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69417 ] 00:12:12.126 [2024-10-13 17:42:01.896007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.386 [2024-10-13 17:42:02.023851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.648 Running I/O for 5 seconds... 00:12:14.532 176192.00 IOPS, 688.25 MiB/s [2024-10-13T17:42:05.732Z] 187264.00 IOPS, 731.50 MiB/s [2024-10-13T17:42:06.674Z] 201877.33 IOPS, 788.58 MiB/s [2024-10-13T17:42:07.616Z] 209168.00 IOPS, 817.06 MiB/s 00:12:17.802 Latency(us) 00:12:17.802 [2024-10-13T17:42:07.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.802 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:17.802 null0 : 5.00 213536.32 834.13 0.00 0.00 297.28 145.72 1991.29 00:12:17.802 [2024-10-13T17:42:07.616Z] =================================================================================================================== 00:12:17.802 [2024-10-13T17:42:07.616Z] Total : 213536.32 834.13 0.00 0.00 297.28 145.72 1991.29 00:12:18.373 17:42:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:18.373 17:42:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:18.373 00:12:18.373 real 0m12.462s 00:12:18.373 user 0m9.946s 00:12:18.373 sys 0m2.275s 00:12:18.373 17:42:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.373 17:42:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.373 ************************************ 00:12:18.373 END TEST xnvme_bdevperf 00:12:18.373 ************************************ 00:12:18.373 00:12:18.373 real 0m39.911s 00:12:18.373 user 0m33.595s 00:12:18.373 sys 0m5.472s 00:12:18.373 17:42:08 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.373 ************************************ 00:12:18.373 END TEST nvme_xnvme 00:12:18.373 17:42:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.373 ************************************ 00:12:18.373 17:42:08 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:18.373 17:42:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.373 17:42:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.373 17:42:08 -- common/autotest_common.sh@10 -- # set +x 00:12:18.373 ************************************ 00:12:18.373 START TEST blockdev_xnvme 00:12:18.373 ************************************ 00:12:18.373 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:18.373 * Looking for test storage... 
00:12:18.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:18.373 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.373 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.373 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.634 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:18.634 17:42:08 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.635 17:42:08 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.635 --rc genhtml_branch_coverage=1 00:12:18.635 --rc genhtml_function_coverage=1 00:12:18.635 --rc genhtml_legend=1 00:12:18.635 --rc geninfo_all_blocks=1 00:12:18.635 --rc geninfo_unexecuted_blocks=1 00:12:18.635 00:12:18.635 ' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.635 --rc genhtml_branch_coverage=1 00:12:18.635 --rc genhtml_function_coverage=1 00:12:18.635 --rc genhtml_legend=1 
00:12:18.635 --rc geninfo_all_blocks=1 00:12:18.635 --rc geninfo_unexecuted_blocks=1 00:12:18.635 00:12:18.635 ' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.635 --rc genhtml_branch_coverage=1 00:12:18.635 --rc genhtml_function_coverage=1 00:12:18.635 --rc genhtml_legend=1 00:12:18.635 --rc geninfo_all_blocks=1 00:12:18.635 --rc geninfo_unexecuted_blocks=1 00:12:18.635 00:12:18.635 ' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.635 --rc genhtml_branch_coverage=1 00:12:18.635 --rc genhtml_function_coverage=1 00:12:18.635 --rc genhtml_legend=1 00:12:18.635 --rc geninfo_all_blocks=1 00:12:18.635 --rc geninfo_unexecuted_blocks=1 00:12:18.635 00:12:18.635 ' 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69566 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69566 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69566 ']' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.635 17:42:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.635 17:42:08 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:18.635 [2024-10-13 17:42:08.312197] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:18.635 [2024-10-13 17:42:08.312366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69566 ] 00:12:18.895 [2024-10-13 17:42:08.467232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.895 [2024-10-13 17:42:08.578212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.467 17:42:09 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:19.467 17:42:09 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:19.467 17:42:09 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:19.467 17:42:09 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:19.467 17:42:09 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:19.467 17:42:09 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:19.467 17:42:09 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:19.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.988 Waiting for block devices as requested 00:12:19.988 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.988 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.988 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.988 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.281 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:25.281 nvme0n1 00:12:25.281 nvme1n1 00:12:25.281 nvme2n1 00:12:25.281 nvme2n2 00:12:25.281 nvme2n3 00:12:25.281 nvme3n1 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.281 17:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.281 17:42:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:25.281 17:42:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.281 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "cd6686d3-b52d-40ee-bf88-6f98c405bbb5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cd6686d3-b52d-40ee-bf88-6f98c405bbb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66b15318-9e62-4a11-9511-e666efe89d31"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "66b15318-9e62-4a11-9511-e666efe89d31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb0e79d7-4cb9-46f3-9502-496d9ed1396b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb0e79d7-4cb9-46f3-9502-496d9ed1396b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2188b218-12e8-4943-a33d-0c0078842966"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2188b218-12e8-4943-a33d-0c0078842966",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "456ac2da-8366-43ac-b8d9-bf57c799ae3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "456ac2da-8366-43ac-b8d9-bf57c799ae3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7c70c6e8-5826-403f-9ba2-6a96b10d1705"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7c70c6e8-5826-403f-9ba2-6a96b10d1705",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:25.282 17:42:15 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69566 00:12:25.282 17:42:15 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69566 ']' 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69566 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69566 00:12:25.282 killing process with pid 69566 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69566' 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69566 00:12:25.282 17:42:15 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69566 00:12:26.705 17:42:16 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:26.705 17:42:16 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:26.705 17:42:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:26.705 17:42:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.705 17:42:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.705 ************************************ 00:12:26.705 START TEST bdev_hello_world 00:12:26.705 ************************************ 00:12:26.705 17:42:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:26.705 [2024-10-13 17:42:16.401598] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:26.705 [2024-10-13 17:42:16.401867] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69925 ] 00:12:26.966 [2024-10-13 17:42:16.550449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.966 [2024-10-13 17:42:16.638623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.228 [2024-10-13 17:42:16.942048] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:27.228 [2024-10-13 17:42:16.942089] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:27.228 [2024-10-13 17:42:16.942102] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:27.228 [2024-10-13 17:42:16.943693] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:27.228 [2024-10-13 17:42:16.943974] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:27.228 [2024-10-13 17:42:16.943990] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:27.228 [2024-10-13 17:42:16.944337] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
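At this point the hello-world example has completed a full round trip against the first xnvme bdev: open the bdev, get an I/O channel, write a buffer, read it back, and compare. A hedged sketch of replaying that step outside the harness, assuming this run's repository layout and the bdev config the test generated:

# Sketch only (paths assumed from this run's layout): re-run the hello_bdev
# example against the same JSON bdev config and the nvme0n1 xnvme bdev
# registered above via bdev_xnvme_create.
cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1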
00:12:27.228 00:12:27.228 [2024-10-13 17:42:16.944354] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:27.800 00:12:27.800 real 0m1.190s 00:12:27.800 user 0m0.904s 00:12:27.800 sys 0m0.175s 00:12:27.800 17:42:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.800 ************************************ 00:12:27.800 END TEST bdev_hello_world 00:12:27.800 ************************************ 00:12:27.800 17:42:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:27.800 17:42:17 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:27.800 17:42:17 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.800 17:42:17 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.800 17:42:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.800 ************************************ 00:12:27.800 START TEST bdev_bounds 00:12:27.800 ************************************ 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69956 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:27.800 Process bdevio pid: 69956 00:12:27.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69956' 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69956 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69956 ']' 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.800 17:42:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:28.061 [2024-10-13 17:42:17.654771] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
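bdev_bounds drives the bdevio binary in wait mode: -w makes bdevio start up, register the bdevs from the JSON config, and then block until the test trigger arrives over the default /var/tmp/spdk.sock RPC socket, which tests.py issues. A minimal sketch of that two-process pattern, using the same paths as this run:

# Sketch: start bdevio waiting for the RPC trigger, then fire the suites.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
bdevio_pid=$!
# (the harness waits for /var/tmp/spdk.sock to appear before continuing)
./test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"   # the harness does this via killprocess once the suites finish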
00:12:28.061 [2024-10-13 17:42:17.654900] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69956 ] 00:12:28.061 [2024-10-13 17:42:17.806869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:28.323 [2024-10-13 17:42:17.905654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.323 [2024-10-13 17:42:17.905994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.323 [2024-10-13 17:42:17.905998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.896 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.896 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:28.896 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:28.896 I/O targets: 00:12:28.896 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:28.896 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:28.896 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:28.896 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:28.896 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:28.896 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:28.896 00:12:28.896 00:12:28.896 CUnit - A unit testing framework for C - Version 2.1-3 00:12:28.896 http://cunit.sourceforge.net/ 00:12:28.896 00:12:28.896 00:12:28.896 Suite: bdevio tests on: nvme3n1 00:12:28.896 Test: blockdev write read block ...passed 00:12:28.896 Test: blockdev write zeroes read block ...passed 00:12:28.896 Test: blockdev write zeroes read no split ...passed 00:12:28.896 Test: blockdev write zeroes read split ...passed 00:12:28.896 Test: blockdev write zeroes read split partial ...passed 00:12:28.896 Test: blockdev reset ...passed 00:12:28.896 Test: blockdev write read 8 blocks ...passed 00:12:28.896 Test: blockdev write read size > 128k ...passed 00:12:28.896 Test: blockdev write read invalid size ...passed 00:12:28.896 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.896 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.896 Test: blockdev write read max offset ...passed 00:12:28.896 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.896 Test: blockdev writev readv 8 blocks ...passed 00:12:28.896 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.896 Test: blockdev writev readv block ...passed 00:12:28.896 Test: blockdev writev readv size > 128k ...passed 00:12:28.896 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.896 Test: blockdev comparev and writev ...passed 00:12:28.896 Test: blockdev nvme passthru rw ...passed 00:12:28.896 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.896 Test: blockdev nvme admin passthru ...passed 00:12:28.896 Test: blockdev copy ...passed 00:12:28.896 Suite: bdevio tests on: nvme2n3 00:12:28.896 Test: blockdev write read block ...passed 00:12:28.896 Test: blockdev write zeroes read block ...passed 00:12:28.896 Test: blockdev write zeroes read no split ...passed 00:12:29.158 Test: blockdev write zeroes read split ...passed 00:12:29.158 Test: blockdev write zeroes read split partial ...passed 00:12:29.158 Test: blockdev reset ...passed 
00:12:29.158 Test: blockdev write read 8 blocks ...passed 00:12:29.158 Test: blockdev write read size > 128k ...passed 00:12:29.158 Test: blockdev write read invalid size ...passed 00:12:29.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.158 Test: blockdev write read max offset ...passed 00:12:29.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.158 Test: blockdev writev readv 8 blocks ...passed 00:12:29.158 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.158 Test: blockdev writev readv block ...passed 00:12:29.158 Test: blockdev writev readv size > 128k ...passed 00:12:29.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.158 Test: blockdev comparev and writev ...passed 00:12:29.158 Test: blockdev nvme passthru rw ...passed 00:12:29.158 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.158 Test: blockdev nvme admin passthru ...passed 00:12:29.158 Test: blockdev copy ...passed 00:12:29.158 Suite: bdevio tests on: nvme2n2 00:12:29.158 Test: blockdev write read block ...passed 00:12:29.158 Test: blockdev write zeroes read block ...passed 00:12:29.158 Test: blockdev write zeroes read no split ...passed 00:12:29.158 Test: blockdev write zeroes read split ...passed 00:12:29.158 Test: blockdev write zeroes read split partial ...passed 00:12:29.158 Test: blockdev reset ...passed 00:12:29.158 Test: blockdev write read 8 blocks ...passed 00:12:29.158 Test: blockdev write read size > 128k ...passed 00:12:29.158 Test: blockdev write read invalid size ...passed 00:12:29.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.158 Test: blockdev write read max offset ...passed 00:12:29.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.158 Test: blockdev writev readv 8 blocks ...passed 00:12:29.158 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.158 Test: blockdev writev readv block ...passed 00:12:29.158 Test: blockdev writev readv size > 128k ...passed 00:12:29.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.158 Test: blockdev comparev and writev ...passed 00:12:29.158 Test: blockdev nvme passthru rw ...passed 00:12:29.158 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.158 Test: blockdev nvme admin passthru ...passed 00:12:29.158 Test: blockdev copy ...passed 00:12:29.158 Suite: bdevio tests on: nvme2n1 00:12:29.158 Test: blockdev write read block ...passed 00:12:29.158 Test: blockdev write zeroes read block ...passed 00:12:29.158 Test: blockdev write zeroes read no split ...passed 00:12:29.159 Test: blockdev write zeroes read split ...passed 00:12:29.159 Test: blockdev write zeroes read split partial ...passed 00:12:29.159 Test: blockdev reset ...passed 00:12:29.159 Test: blockdev write read 8 blocks ...passed 00:12:29.159 Test: blockdev write read size > 128k ...passed 00:12:29.159 Test: blockdev write read invalid size ...passed 00:12:29.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.159 Test: blockdev write read max offset ...passed 00:12:29.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.159 Test: blockdev writev readv 8 blocks 
...passed 00:12:29.159 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.159 Test: blockdev writev readv block ...passed 00:12:29.159 Test: blockdev writev readv size > 128k ...passed 00:12:29.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.159 Test: blockdev comparev and writev ...passed 00:12:29.159 Test: blockdev nvme passthru rw ...passed 00:12:29.159 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.159 Test: blockdev nvme admin passthru ...passed 00:12:29.159 Test: blockdev copy ...passed 00:12:29.159 Suite: bdevio tests on: nvme1n1 00:12:29.159 Test: blockdev write read block ...passed 00:12:29.159 Test: blockdev write zeroes read block ...passed 00:12:29.159 Test: blockdev write zeroes read no split ...passed 00:12:29.420 Test: blockdev write zeroes read split ...passed 00:12:29.420 Test: blockdev write zeroes read split partial ...passed 00:12:29.420 Test: blockdev reset ...passed 00:12:29.420 Test: blockdev write read 8 blocks ...passed 00:12:29.420 Test: blockdev write read size > 128k ...passed 00:12:29.420 Test: blockdev write read invalid size ...passed 00:12:29.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.420 Test: blockdev write read max offset ...passed 00:12:29.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.420 Test: blockdev writev readv 8 blocks ...passed 00:12:29.420 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.420 Test: blockdev writev readv block ...passed 00:12:29.420 Test: blockdev writev readv size > 128k ...passed 00:12:29.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.420 Test: blockdev comparev and writev ...passed 00:12:29.420 Test: blockdev nvme passthru rw ...passed 00:12:29.420 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.420 Test: blockdev nvme admin passthru ...passed 00:12:29.420 Test: blockdev copy ...passed 00:12:29.420 Suite: bdevio tests on: nvme0n1 00:12:29.420 Test: blockdev write read block ...passed 00:12:29.420 Test: blockdev write zeroes read block ...passed 00:12:29.420 Test: blockdev write zeroes read no split ...passed 00:12:29.420 Test: blockdev write zeroes read split ...passed 00:12:29.420 Test: blockdev write zeroes read split partial ...passed 00:12:29.420 Test: blockdev reset ...passed 00:12:29.420 Test: blockdev write read 8 blocks ...passed 00:12:29.420 Test: blockdev write read size > 128k ...passed 00:12:29.420 Test: blockdev write read invalid size ...passed 00:12:29.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.420 Test: blockdev write read max offset ...passed 00:12:29.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.420 Test: blockdev writev readv 8 blocks ...passed 00:12:29.420 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.420 Test: blockdev writev readv block ...passed 00:12:29.420 Test: blockdev writev readv size > 128k ...passed 00:12:29.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.420 Test: blockdev comparev and writev ...passed 00:12:29.420 Test: blockdev nvme passthru rw ...passed 00:12:29.420 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.420 Test: blockdev nvme admin passthru ...passed 00:12:29.420 Test: blockdev copy ...passed 
00:12:29.420 00:12:29.420 Run Summary: Type Total Ran Passed Failed Inactive 00:12:29.420 suites 6 6 n/a 0 0 00:12:29.420 tests 138 138 138 0 0 00:12:29.420 asserts 780 780 780 0 n/a 00:12:29.420 00:12:29.420 Elapsed time = 1.251 seconds 00:12:29.420 0 00:12:29.420 17:42:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69956 00:12:29.420 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69956 ']' 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69956 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69956 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69956' 00:12:29.421 killing process with pid 69956 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69956 00:12:29.421 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69956 00:12:29.991 17:42:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:29.991 00:12:29.991 real 0m2.158s 00:12:29.991 user 0m5.381s 00:12:29.991 sys 0m0.322s 00:12:29.991 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.991 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 ************************************ 00:12:29.991 END TEST bdev_bounds 00:12:29.991 ************************************ 00:12:29.991 17:42:19 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:29.991 17:42:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:29.991 17:42:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.991 17:42:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.251 ************************************ 00:12:30.251 START TEST bdev_nbd 00:12:30.251 ************************************ 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
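The nbd test that follows first confirms the kernel nbd module is loaded (the [[ -e /sys/module/nbd ]] check in the next trace line) before pairing the six bdevs with the six /dev/nbd nodes from nbd_list (nbd0, nbd1, nbd10..nbd13). A hedged sketch of that precondition; the modprobe is an assumption, since this run found the module already present:

# Sketch: make sure the nbd kernel module is available before the nbd tests.
if [[ ! -e /sys/module/nbd ]]; then
    sudo modprobe nbd   # assumed step; not exercised in this log
fi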
00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70010 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70010 /var/tmp/spdk-nbd.sock 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 70010 ']' 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:30.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:30.251 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.252 17:42:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:30.252 17:42:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:30.252 [2024-10-13 17:42:19.886026] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
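For the nbd flow the harness runs bdev_svc on its own socket, /var/tmp/spdk-nbd.sock, so the nbd RPCs stay separate from the main target's /var/tmp/spdk.sock. Once waitforlisten sees the socket, each bdev is exported as a kernel block device and later torn down again. A minimal sketch of that lifecycle, using the same RPC calls that appear below:

# Sketch: export a bdev over nbd via the dedicated socket, then detach it.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc nbd_start_disk nvme0n1 /dev/nbd0   # map bdev nvme0n1 onto /dev/nbd0
rpc nbd_get_disks                      # JSON list of active nbd mappings
rpc nbd_stop_disk /dev/nbd0            # detach the mapping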
00:12:30.252 [2024-10-13 17:42:19.886153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.252 [2024-10-13 17:42:20.038191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.513 [2024-10-13 17:42:20.153433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.084 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.345 
1+0 records in 00:12:31.345 1+0 records out 00:12:31.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783849 s, 5.2 MB/s 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.345 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.605 1+0 records in 00:12:31.605 1+0 records out 00:12:31.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120168 s, 3.4 MB/s 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:31.605 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:31.864 17:42:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.864 1+0 records in 00:12:31.864 1+0 records out 00:12:31.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680983 s, 6.0 MB/s 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.864 1+0 records in 00:12:31.864 1+0 records out 00:12:31.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116669 s, 3.5 MB/s 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.864 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:31.865 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.865 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:32.125 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.126 1+0 records in 00:12:32.126 1+0 records out 00:12:32.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679397 s, 6.0 MB/s 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.126 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:32.387 17:42:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.387 1+0 records in 00:12:32.387 1+0 records out 00:12:32.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923881 s, 4.4 MB/s 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:32.387 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd0", 00:12:32.648 "bdev_name": "nvme0n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd1", 00:12:32.648 "bdev_name": "nvme1n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd2", 00:12:32.648 "bdev_name": "nvme2n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd3", 00:12:32.648 "bdev_name": "nvme2n2" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd4", 00:12:32.648 "bdev_name": "nvme2n3" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd5", 00:12:32.648 "bdev_name": "nvme3n1" 00:12:32.648 } 00:12:32.648 ]' 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd0", 00:12:32.648 "bdev_name": "nvme0n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd1", 00:12:32.648 "bdev_name": "nvme1n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd2", 00:12:32.648 "bdev_name": "nvme2n1" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd3", 00:12:32.648 "bdev_name": "nvme2n2" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd4", 00:12:32.648 "bdev_name": "nvme2n3" 00:12:32.648 }, 00:12:32.648 { 00:12:32.648 "nbd_device": "/dev/nbd5", 00:12:32.648 "bdev_name": "nvme3n1" 00:12:32.648 } 00:12:32.648 ]' 00:12:32.648 17:42:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.648 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.909 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.169 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.430 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.692 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.954 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.215 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.215 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:34.475 /dev/nbd0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.475 1+0 records in 00:12:34.475 1+0 records out 00:12:34.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106434 s, 3.8 MB/s 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.475 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:34.737 /dev/nbd1 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.737 1+0 records in 00:12:34.737 1+0 records out 00:12:34.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138399 s, 3.0 MB/s 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:34.737 17:42:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.737 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:34.998 /dev/nbd10 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.998 1+0 records in 00:12:34.998 1+0 records out 00:12:34.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112211 s, 3.7 MB/s 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:34.998 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:35.259 /dev/nbd11 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.259 17:42:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.259 1+0 records in 00:12:35.259 1+0 records out 00:12:35.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108038 s, 3.8 MB/s 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.259 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.260 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.260 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.260 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.260 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.260 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:35.525 /dev/nbd12 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.525 1+0 records in 00:12:35.525 1+0 records out 00:12:35.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805342 s, 5.1 MB/s 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.525 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.526 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.526 17:42:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.526 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.526 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:35.787 /dev/nbd13 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.787 1+0 records in 00:12:35.787 1+0 records out 00:12:35.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107622 s, 3.8 MB/s 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.787 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd0", 00:12:36.048 "bdev_name": "nvme0n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd1", 00:12:36.048 "bdev_name": "nvme1n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd10", 00:12:36.048 "bdev_name": "nvme2n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd11", 00:12:36.048 "bdev_name": "nvme2n2" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd12", 00:12:36.048 "bdev_name": "nvme2n3" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd13", 00:12:36.048 "bdev_name": "nvme3n1" 00:12:36.048 } 00:12:36.048 ]' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd0", 00:12:36.048 "bdev_name": "nvme0n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd1", 00:12:36.048 "bdev_name": "nvme1n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd10", 00:12:36.048 "bdev_name": "nvme2n1" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd11", 00:12:36.048 "bdev_name": "nvme2n2" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd12", 00:12:36.048 "bdev_name": "nvme2n3" 00:12:36.048 }, 00:12:36.048 { 00:12:36.048 "nbd_device": "/dev/nbd13", 00:12:36.048 "bdev_name": "nvme3n1" 00:12:36.048 } 00:12:36.048 ]' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:36.048 /dev/nbd1 00:12:36.048 /dev/nbd10 00:12:36.048 /dev/nbd11 00:12:36.048 /dev/nbd12 00:12:36.048 /dev/nbd13' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:36.048 /dev/nbd1 00:12:36.048 /dev/nbd10 00:12:36.048 /dev/nbd11 00:12:36.048 /dev/nbd12 00:12:36.048 /dev/nbd13' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:36.048 256+0 records in 00:12:36.048 256+0 records out 00:12:36.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700705 s, 150 MB/s 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.048 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:36.309 256+0 records in 00:12:36.309 256+0 records out 00:12:36.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230266 s, 4.6 MB/s 00:12:36.309 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.309 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:36.571 256+0 records in 00:12:36.571 256+0 records out 00:12:36.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257361 s, 
4.1 MB/s 00:12:36.571 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.571 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:36.832 256+0 records in 00:12:36.832 256+0 records out 00:12:36.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237057 s, 4.4 MB/s 00:12:36.832 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.832 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:37.094 256+0 records in 00:12:37.094 256+0 records out 00:12:37.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240965 s, 4.4 MB/s 00:12:37.094 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.094 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:37.355 256+0 records in 00:12:37.355 256+0 records out 00:12:37.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238799 s, 4.4 MB/s 00:12:37.355 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.355 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:37.617 256+0 records in 00:12:37.617 256+0 records out 00:12:37.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200411 s, 5.2 MB/s 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:37.618 
17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.618 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.879 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.139 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.400 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.661 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.922 
17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.922 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.183 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:39.184 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:39.184 malloc_lvol_verify 00:12:39.444 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:39.444 9a98d94b-f210-4e87-89c1-07ba2b4bb3ac 00:12:39.444 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:39.705 8c677f3c-c854-4d0b-9c46-2b4441f9601c 00:12:39.705 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:39.966 /dev/nbd0 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:39.966 mke2fs 1.47.0 (5-Feb-2023) 00:12:39.966 Discarding device blocks: 0/4096 
done 00:12:39.966 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:39.966 00:12:39.966 Allocating group tables: 0/1 done 00:12:39.966 Writing inode tables: 0/1 done 00:12:39.966 Creating journal (1024 blocks): done 00:12:39.966 Writing superblocks and filesystem accounting information: 0/1 done 00:12:39.966 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.966 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70010 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 70010 ']' 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 70010 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70010 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.236 killing process with pid 70010 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70010' 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 70010 00:12:40.236 17:42:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 70010 00:12:40.840 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:40.840 00:12:40.840 real 0m10.689s 00:12:40.840 user 0m14.458s 00:12:40.840 sys 0m3.714s 00:12:40.840 ************************************ 00:12:40.840 END TEST bdev_nbd 00:12:40.840 ************************************ 00:12:40.840 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.840 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
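Two helper patterns recur throughout the nbd traces above and are easier to read in one place. First, waitfornbd/waitfornbd_exit poll /proc/partitions in a bounded loop while a /dev/nbdX node appears or disappears; second, nbd_dd_data_verify pushes one random 1 MiB file through every exported device with dd and reads it back with cmp. A minimal bash sketch of both, reconstructed from the xtrace (the helper name, the 20-iteration budget, and the dd/cmp flags match the trace; the sleep back-off and error handling are assumptions, since the trace only shows the successful first pass):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # wait for the kernel to drop the device from /proc/partitions
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # assumed back-off; not visible in the trace
            else
                break        # entry gone, the disk is fully stopped
            fi
        done
        return 0
    }

    # write-then-verify pass over the exported devices, as in
    # "nbd_dd_data_verify '...' write" / "'...' verify" above
    tmp=nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$tmp" "$nbd"    # any differing byte fails the test
    done
    rm "$tmp"

Writing with oflag=direct and comparing only the first 1 MiB keeps the pass cheap while still proving each bdev round-trips data through the nbd layer.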
00:12:40.840 17:42:30 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:40.840 17:42:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:40.840 17:42:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:40.840 17:42:30 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:40.840 17:42:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.840 17:42:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.840 17:42:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.840 ************************************ 00:12:40.840 START TEST bdev_fio 00:12:40.840 ************************************ 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:40.840 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:40.840 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:40.841 ************************************ 00:12:40.841 START TEST bdev_fio_rw_verify 00:12:40.841 ************************************ 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:40.841 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:41.103 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.103 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.103 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:41.103 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:41.103 17:42:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:41.103 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:41.103 fio-3.35 00:12:41.103 Starting 6 threads 00:12:53.345 00:12:53.345 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70417: Sun Oct 13 17:42:41 2024 00:12:53.345 read: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(546MiB/10001msec) 00:12:53.346 slat (usec): min=2, max=2041, avg= 7.04, stdev=19.69 00:12:53.346 clat (usec): min=71, max=7464, avg=1363.60, stdev=827.51 00:12:53.346 lat (usec): min=75, max=7476, avg=1370.65, stdev=828.29 
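The bdev_fio setup above is stock fio driven through SPDK's external ioengine: fio_config_gen writes a bdev.fio with one [job_<bdev>] section per device, and fio_bdev then preloads the sanitizer runtime ahead of the plugin. Condensed into a single invocation (every path and flag below appears verbatim in the trace; only the line layout is new):

    # bdev.fio carries one section per bdev, e.g.:
    #   [job_nvme0n1]
    #   filename=nvme0n1
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 \
        --aux-path=/home/vagrant/spdk_repo/spdk/../output \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

Preload order matters here: the ASan runtime generally has to be loaded before any instrumented DSO, which is why the trace above greps the plugin's ldd output for libasan (autotest_common.sh@1345) before assembling LD_PRELOAD with the sanitizer listed first.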
00:12:53.346 clat percentiles (usec): 00:12:53.346 | 50.000th=[ 1237], 99.000th=[ 3982], 99.900th=[ 5342], 99.990th=[ 7373], 00:12:53.346 | 99.999th=[ 7439] 00:12:53.346 write: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(555MiB/10001msec); 0 zone resets 00:12:53.346 slat (usec): min=13, max=5248, avg=46.60, stdev=164.10 00:12:53.346 clat (usec): min=80, max=11689, avg=1688.57, stdev=938.13 00:12:53.346 lat (usec): min=100, max=12168, avg=1735.17, stdev=953.09 00:12:53.346 clat percentiles (usec): 00:12:53.346 | 50.000th=[ 1549], 99.000th=[ 4621], 99.900th=[ 6390], 99.990th=[10552], 00:12:53.346 | 99.999th=[11731] 00:12:53.346 bw ( KiB/s): min=46142, max=77347, per=99.50%, avg=56515.26, stdev=1442.27, samples=114 00:12:53.346 iops : min=11534, max=19336, avg=14127.68, stdev=360.63, samples=114 00:12:53.346 lat (usec) : 100=0.01%, 250=2.24%, 500=7.93%, 750=9.33%, 1000=11.03% 00:12:53.346 lat (msec) : 2=44.72%, 4=23.15%, 10=1.59%, 20=0.01% 00:12:53.346 cpu : usr=41.87%, sys=34.10%, ctx=5185, majf=0, minf=14309 00:12:53.346 IO depths : 1=11.0%, 2=23.3%, 4=51.5%, 8=14.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:53.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.346 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.346 issued rwts: total=139898,142015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:53.346 00:12:53.346 Run status group 0 (all jobs): 00:12:53.346 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=546MiB (573MB), run=10001-10001msec 00:12:53.346 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=555MiB (582MB), run=10001-10001msec 00:12:53.346 ----------------------------------------------------- 00:12:53.346 Suppressions used: 00:12:53.346 count bytes template 00:12:53.346 6 48 /usr/src/fio/parse.c 00:12:53.346 2025 194400 /usr/src/fio/iolog.c 00:12:53.346 1 8 libtcmalloc_minimal.so 00:12:53.346 1 904 libcrypto.so 00:12:53.346 ----------------------------------------------------- 00:12:53.346 00:12:53.346 00:12:53.346 real 0m11.977s 00:12:53.346 user 0m26.675s 00:12:53.346 sys 0m20.769s 00:12:53.346 ************************************ 00:12:53.346 END TEST bdev_fio_rw_verify 00:12:53.346 ************************************ 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "cd6686d3-b52d-40ee-bf88-6f98c405bbb5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cd6686d3-b52d-40ee-bf88-6f98c405bbb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66b15318-9e62-4a11-9511-e666efe89d31"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "66b15318-9e62-4a11-9511-e666efe89d31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb0e79d7-4cb9-46f3-9502-496d9ed1396b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb0e79d7-4cb9-46f3-9502-496d9ed1396b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2188b218-12e8-4943-a33d-0c0078842966"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2188b218-12e8-4943-a33d-0c0078842966",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "456ac2da-8366-43ac-b8d9-bf57c799ae3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "456ac2da-8366-43ac-b8d9-bf57c799ae3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7c70c6e8-5826-403f-9ba2-6a96b10d1705"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7c70c6e8-5826-403f-9ba2-6a96b10d1705",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.346 /home/vagrant/spdk_repo/spdk 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:12:53.346 00:12:53.346 real 0m12.155s 00:12:53.346 user 
0m26.756s 00:12:53.346 sys 0m20.843s 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.346 17:42:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:53.346 ************************************ 00:12:53.346 END TEST bdev_fio 00:12:53.346 ************************************ 00:12:53.346 17:42:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:53.346 17:42:42 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:53.346 17:42:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:53.346 17:42:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.346 17:42:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.346 ************************************ 00:12:53.346 START TEST bdev_verify 00:12:53.346 ************************************ 00:12:53.346 17:42:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:53.346 [2024-10-13 17:42:42.867920] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:53.346 [2024-10-13 17:42:42.868089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:12:53.346 [2024-10-13 17:42:43.026486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.607 [2024-10-13 17:42:43.172163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.607 [2024-10-13 17:42:43.172250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.868 Running I/O for 5 seconds... 
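For reference, the bdevperf invocation shown above is the whole of the verify pass; its results table follows below. A by-hand re-run is just the same command (path and flags are taken verbatim from the log line; none of this is new):

    # -q 128: queue depth; -o 4096: I/O size in bytes; -w verify: write,
    # read back, and compare; -t 5: run for 5 seconds; -m 0x3: reactors on
    # cores 0 and 1. -C and the trailing '' positional are passed through
    # from the harness unchanged.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3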
00:12:56.193 22016.00 IOPS, 86.00 MiB/s [2024-10-13T17:42:46.950Z] 22384.00 IOPS, 87.44 MiB/s [2024-10-13T17:42:47.894Z] 23252.33 IOPS, 90.83 MiB/s [2024-10-13T17:42:48.837Z] 23679.25 IOPS, 92.50 MiB/s [2024-10-13T17:42:48.837Z] 23423.40 IOPS, 91.50 MiB/s 00:12:59.023 Latency(us) 00:12:59.023 [2024-10-13T17:42:48.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.023 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0xa0000 00:12:59.023 nvme0n1 : 5.04 1829.25 7.15 0.00 0.00 69839.85 10132.87 67754.14 00:12:59.023 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0xa0000 length 0xa0000 00:12:59.023 nvme0n1 : 5.06 1898.34 7.42 0.00 0.00 67287.54 9023.80 72190.42 00:12:59.023 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0xbd0bd 00:12:59.023 nvme1n1 : 5.05 2292.28 8.95 0.00 0.00 55455.02 4209.43 70577.23 00:12:59.023 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:59.023 nvme1n1 : 5.08 2317.59 9.05 0.00 0.00 54992.85 4133.81 70980.53 00:12:59.023 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0x80000 00:12:59.023 nvme2n1 : 5.05 1875.18 7.32 0.00 0.00 67910.34 10788.23 75013.51 00:12:59.023 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x80000 length 0x80000 00:12:59.023 nvme2n1 : 5.05 1924.75 7.52 0.00 0.00 65860.50 9326.28 70980.53 00:12:59.023 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0x80000 00:12:59.023 nvme2n2 : 5.06 1847.62 7.22 0.00 0.00 68652.43 13208.02 69770.63 00:12:59.023 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x80000 length 0x80000 00:12:59.023 nvme2n2 : 5.06 1895.92 7.41 0.00 0.00 66724.38 14115.45 68157.44 00:12:59.023 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0x80000 00:12:59.023 nvme2n3 : 5.07 1843.31 7.20 0.00 0.00 68635.24 7410.61 69770.63 00:12:59.023 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x80000 length 0x80000 00:12:59.023 nvme2n3 : 5.09 1910.49 7.46 0.00 0.00 66089.96 6225.92 74206.92 00:12:59.023 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x0 length 0x20000 00:12:59.023 nvme3n1 : 5.07 1842.77 7.20 0.00 0.00 68545.19 4436.28 68560.74 00:12:59.023 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.023 Verification LBA range: start 0x20000 length 0x20000 00:12:59.023 nvme3n1 : 5.09 1911.05 7.47 0.00 0.00 66000.82 2344.17 80659.69 00:12:59.023 [2024-10-13T17:42:48.837Z] =================================================================================================================== 00:12:59.023 [2024-10-13T17:42:48.837Z] Total : 23388.54 91.36 0.00 0.00 65103.28 2344.17 80659.69 00:12:59.967 00:12:59.967 real 0m6.868s 00:12:59.967 user 0m11.189s 00:12:59.967 sys 0m1.376s 00:12:59.967 ************************************ 00:12:59.967 END TEST 
bdev_verify 00:12:59.967 ************************************ 00:12:59.967 17:42:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.967 17:42:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:59.967 17:42:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:59.967 17:42:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:59.967 17:42:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.967 17:42:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.967 ************************************ 00:12:59.967 START TEST bdev_verify_big_io 00:12:59.967 ************************************ 00:12:59.967 17:42:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:00.228 [2024-10-13 17:42:49.811163] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:00.228 [2024-10-13 17:42:49.811324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70693 ] 00:13:00.228 [2024-10-13 17:42:49.968233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:00.489 [2024-10-13 17:42:50.138924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.489 [2024-10-13 17:42:50.139042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.062 Running I/O for 5 seconds... 
00:13:06.913 1076.00 IOPS, 67.25 MiB/s [2024-10-13T17:42:57.023Z] 2679.00 IOPS, 167.44 MiB/s [2024-10-13T17:42:57.023Z] 2946.67 IOPS, 184.17 MiB/s 00:13:07.209 Latency(us) 00:13:07.209 [2024-10-13T17:42:57.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.209 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0xa000 00:13:07.209 nvme0n1 : 6.04 140.35 8.77 0.00 0.00 891541.66 16938.54 1129235.69 00:13:07.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0xa000 length 0xa000 00:13:07.209 nvme0n1 : 6.02 107.61 6.73 0.00 0.00 1162752.72 94775.14 1013085.74 00:13:07.209 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0xbd0b 00:13:07.209 nvme1n1 : 6.05 148.22 9.26 0.00 0.00 809205.79 11695.66 1038896.84 00:13:07.209 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:07.209 nvme1n1 : 6.00 112.62 7.04 0.00 0.00 1053725.74 13510.50 1703532.70 00:13:07.209 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0x8000 00:13:07.209 nvme2n1 : 6.05 81.99 5.12 0.00 0.00 1412159.32 114536.76 2916654.47 00:13:07.209 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x8000 length 0x8000 00:13:07.209 nvme2n1 : 6.01 130.53 8.16 0.00 0.00 902374.13 109697.18 961463.53 00:13:07.209 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0x8000 00:13:07.209 nvme2n2 : 6.06 139.97 8.75 0.00 0.00 797052.83 124215.93 803370.54 00:13:07.209 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x8000 length 0x8000 00:13:07.209 nvme2n2 : 6.01 138.46 8.65 0.00 0.00 826037.17 97194.93 1167952.34 00:13:07.209 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0x8000 00:13:07.209 nvme2n3 : 6.07 115.70 7.23 0.00 0.00 939792.13 11897.30 2516582.40 00:13:07.209 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x8000 length 0x8000 00:13:07.209 nvme2n3 : 6.01 180.97 11.31 0.00 0.00 611216.19 13308.85 816276.09 00:13:07.209 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x0 length 0x2000 00:13:07.209 nvme3n1 : 6.07 98.82 6.18 0.00 0.00 1067855.51 14317.10 2168132.53 00:13:07.209 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:07.209 Verification LBA range: start 0x2000 length 0x2000 00:13:07.209 nvme3n1 : 6.02 151.53 9.47 0.00 0.00 713299.27 6452.78 1245385.65 00:13:07.209 [2024-10-13T17:42:57.023Z] =================================================================================================================== 00:13:07.209 [2024-10-13T17:42:57.023Z] Total : 1546.77 96.67 0.00 0.00 893236.19 6452.78 2916654.47 00:13:08.153 00:13:08.153 real 0m8.077s 00:13:08.153 user 0m14.569s 00:13:08.153 sys 0m0.603s 00:13:08.153 ************************************ 00:13:08.153 END TEST bdev_verify_big_io 00:13:08.153 ************************************ 
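The big-I/O pass above is the same harness with one changed flag: 65536-byte I/Os instead of 4096. The Total rows of the two tables line up with the 16x size ratio: IOPS fall from 23388.54 to 1546.77 while MiB/s holds steady (91.36 vs 96.67), so per-request overhead, not bandwidth, dominates at 4 KiB.

    # Identical to the verify sketch earlier except for the I/O size.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3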
00:13:08.153 17:42:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.153 17:42:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.153 17:42:57 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:08.153 17:42:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:08.153 17:42:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.153 17:42:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.153 ************************************ 00:13:08.153 START TEST bdev_write_zeroes 00:13:08.153 ************************************ 00:13:08.153 17:42:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:08.153 [2024-10-13 17:42:57.963199] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:08.153 [2024-10-13 17:42:57.963367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70808 ] 00:13:08.412 [2024-10-13 17:42:58.113605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.672 [2024-10-13 17:42:58.261582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.933 Running I/O for 1 seconds... 
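The write_zeroes pass just launched can target every device here because each xnvme bdev in the JSON dump near the top of this section advertises "write_zeroes": true. A hypothetical way to confirm that against a live spdk_tgt, mirroring the jq unmap filter the fio test used earlier (bdev_get_bdevs is a standard rpc.py method; this pipeline is illustrative, not part of this run):

    # List bdevs that report write_zeroes support; bdev_get_bdevs returns
    # a JSON array, hence the leading .[] in the jq program.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'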
00:13:10.318 96352.00 IOPS, 376.38 MiB/s 00:13:10.319 Latency(us) 00:13:10.319 [2024-10-13T17:43:00.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.319 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme0n1 : 1.02 15850.93 61.92 0.00 0.00 8065.77 5973.86 20366.57 00:13:10.319 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme1n1 : 1.02 16853.41 65.83 0.00 0.00 7579.12 4612.73 15930.29 00:13:10.319 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme2n1 : 1.02 15830.64 61.84 0.00 0.00 8000.08 5721.80 19257.50 00:13:10.319 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme2n2 : 1.02 15812.68 61.77 0.00 0.00 8001.11 5696.59 19156.68 00:13:10.319 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme2n3 : 1.02 15794.73 61.70 0.00 0.00 8000.29 5797.42 19055.85 00:13:10.319 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:10.319 nvme3n1 : 1.02 15776.73 61.63 0.00 0.00 8001.48 5721.80 18955.03 00:13:10.319 [2024-10-13T17:43:00.133Z] =================================================================================================================== 00:13:10.319 [2024-10-13T17:43:00.133Z] Total : 95919.12 374.68 0.00 0.00 7937.63 4612.73 20366.57 00:13:10.891 00:13:10.891 real 0m2.713s 00:13:10.891 user 0m1.985s 00:13:10.891 sys 0m0.542s 00:13:10.891 ************************************ 00:13:10.891 END TEST bdev_write_zeroes 00:13:10.891 ************************************ 00:13:10.891 17:43:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.891 17:43:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 17:43:00 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:10.891 17:43:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:10.891 17:43:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.891 17:43:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 ************************************ 00:13:10.891 START TEST bdev_json_nonenclosed 00:13:10.891 ************************************ 00:13:10.891 17:43:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.152 [2024-10-13 17:43:00.751494] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:13:11.152 [2024-10-13 17:43:00.752060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:13:11.152 [2024-10-13 17:43:00.909937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.413 [2024-10-13 17:43:01.063215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.413 [2024-10-13 17:43:01.063332] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:11.413 [2024-10-13 17:43:01.063354] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:11.413 [2024-10-13 17:43:01.063366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:11.674 00:13:11.674 real 0m0.607s 00:13:11.674 user 0m0.362s 00:13:11.674 sys 0m0.137s 00:13:11.674 17:43:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.674 ************************************ 00:13:11.674 END TEST bdev_json_nonenclosed 00:13:11.674 ************************************ 00:13:11.674 17:43:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:11.675 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.675 17:43:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:11.675 17:43:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.675 17:43:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.675 ************************************ 00:13:11.675 START TEST bdev_json_nonarray 00:13:11.675 ************************************ 00:13:11.675 17:43:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.675 [2024-10-13 17:43:01.418293] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:11.675 [2024-10-13 17:43:01.418450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:13:11.935 [2024-10-13 17:43:01.577547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.935 [2024-10-13 17:43:01.723955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.935 [2024-10-13 17:43:01.724092] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
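Both JSON negative tests fail exactly where the two error lines above say they should: the config loader first requires the file to be a single JSON object ("not enclosed in {}"), then requires its "subsystems" member to be an array. A minimal passing shape, as a sketch (/tmp/minimal.json is a made-up name, not a file from this run):

    cat > /tmp/minimal.json <<'EOF'
    { "subsystems": [] }
    EOF
    # nonenclosed.json breaks the first rule (a fragment not wrapped in
    # {...}); nonarray.json passes it but makes "subsystems" a non-array,
    # tripping the second. Each is expected to abort app startup, which
    # is the "spdk_app_stop'd on non-zero" the harness checks for.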
00:13:11.935 [2024-10-13 17:43:01.724114] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:11.935 [2024-10-13 17:43:01.724126] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.197 00:13:12.197 real 0m0.597s 00:13:12.197 user 0m0.359s 00:13:12.197 sys 0m0.131s 00:13:12.197 17:43:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.197 ************************************ 00:13:12.197 END TEST bdev_json_nonarray 00:13:12.197 ************************************ 00:13:12.197 17:43:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:12.197 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.197 17:43:02 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:12.197 17:43:02 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:12.197 17:43:02 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:12.197 17:43:02 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:12.197 17:43:02 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:12.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.060 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.447 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.447 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.447 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.709 00:13:19.709 real 1m1.250s 00:13:19.709 user 1m24.933s 00:13:19.709 sys 0m38.855s 00:13:19.709 17:43:09 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.709 ************************************ 00:13:19.709 17:43:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.709 END TEST blockdev_xnvme 00:13:19.709 ************************************ 00:13:19.709 17:43:09 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:19.709 17:43:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:19.709 17:43:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.709 17:43:09 -- common/autotest_common.sh@10 -- # set +x 00:13:19.709 ************************************ 00:13:19.709 START TEST ublk 00:13:19.709 ************************************ 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:19.709 * Looking for test storage... 
00:13:19.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.709 17:43:09 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.709 17:43:09 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.709 17:43:09 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.709 17:43:09 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.709 17:43:09 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.709 17:43:09 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:19.709 17:43:09 ublk -- scripts/common.sh@345 -- # : 1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.709 17:43:09 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:19.709 17:43:09 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@353 -- # local d=1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.709 17:43:09 ublk -- scripts/common.sh@355 -- # echo 1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.709 17:43:09 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@353 -- # local d=2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.709 17:43:09 ublk -- scripts/common.sh@355 -- # echo 2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.709 17:43:09 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.709 17:43:09 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.709 17:43:09 ublk -- scripts/common.sh@368 -- # return 0 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.709 --rc genhtml_branch_coverage=1 00:13:19.709 --rc genhtml_function_coverage=1 00:13:19.709 --rc genhtml_legend=1 00:13:19.709 --rc geninfo_all_blocks=1 00:13:19.709 --rc geninfo_unexecuted_blocks=1 00:13:19.709 00:13:19.709 ' 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.709 --rc genhtml_branch_coverage=1 00:13:19.709 --rc genhtml_function_coverage=1 00:13:19.709 --rc genhtml_legend=1 00:13:19.709 --rc geninfo_all_blocks=1 00:13:19.709 --rc geninfo_unexecuted_blocks=1 00:13:19.709 00:13:19.709 ' 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.709 --rc genhtml_branch_coverage=1 00:13:19.709 --rc 
genhtml_function_coverage=1 00:13:19.709 --rc genhtml_legend=1 00:13:19.709 --rc geninfo_all_blocks=1 00:13:19.709 --rc geninfo_unexecuted_blocks=1 00:13:19.709 00:13:19.709 ' 00:13:19.709 17:43:09 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.709 --rc genhtml_branch_coverage=1 00:13:19.709 --rc genhtml_function_coverage=1 00:13:19.709 --rc genhtml_legend=1 00:13:19.709 --rc geninfo_all_blocks=1 00:13:19.709 --rc geninfo_unexecuted_blocks=1 00:13:19.709 00:13:19.709 ' 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:19.709 17:43:09 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:19.709 17:43:09 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:19.709 17:43:09 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:19.709 17:43:09 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:19.709 17:43:09 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:19.709 17:43:09 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:19.709 17:43:09 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:19.709 17:43:09 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:19.709 17:43:09 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:19.971 17:43:09 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:19.971 17:43:09 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:19.971 17:43:09 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.971 17:43:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.971 ************************************ 00:13:19.971 START TEST test_save_ublk_config 00:13:19.971 ************************************ 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71189 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71189 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71189 ']' 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
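test_save_ublk_config boots a plain spdk_tgt with -L ublk and builds the whole stack over the RPC socket; the saved config printed below records exactly what it created. A hypothetical by-hand equivalent (method names and values come from that config; flag spellings follow current rpc.py and may differ across SPDK versions; the output path is made up for illustration):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B = 32 MiB
    "$RPC" ublk_create_target -m 1                 # cpumask "1"
    "$RPC" ublk_start_disk malloc0 0 -q 1 -d 128   # ublk_id 0 -> /dev/ublkb0
    "$RPC" save_config > /tmp/saved.json           # hypothetical capture path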
00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.971 17:43:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:19.971 [2024-10-13 17:43:09.648492] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:19.971 [2024-10-13 17:43:09.648682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:13:20.233 [2024-10-13 17:43:09.806958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.233 [2024-10-13 17:43:09.953691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:21.177 [2024-10-13 17:43:10.792600] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:21.177 [2024-10-13 17:43:10.793594] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:21.177 malloc0 00:13:21.177 [2024-10-13 17:43:10.871736] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:21.177 [2024-10-13 17:43:10.871861] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:21.177 [2024-10-13 17:43:10.871873] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:21.177 [2024-10-13 17:43:10.871883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:21.177 [2024-10-13 17:43:10.880727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:21.177 [2024-10-13 17:43:10.880769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:21.177 [2024-10-13 17:43:10.886585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:21.177 [2024-10-13 17:43:10.886731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:21.177 [2024-10-13 17:43:10.904598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:21.177 0 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.177 17:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:21.439 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.439 17:43:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:21.439 
"subsystems": [ 00:13:21.439 { 00:13:21.439 "subsystem": "fsdev", 00:13:21.439 "config": [ 00:13:21.439 { 00:13:21.439 "method": "fsdev_set_opts", 00:13:21.439 "params": { 00:13:21.439 "fsdev_io_pool_size": 65535, 00:13:21.439 "fsdev_io_cache_size": 256 00:13:21.439 } 00:13:21.439 } 00:13:21.439 ] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "keyring", 00:13:21.439 "config": [] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "iobuf", 00:13:21.439 "config": [ 00:13:21.439 { 00:13:21.439 "method": "iobuf_set_options", 00:13:21.439 "params": { 00:13:21.439 "small_pool_count": 8192, 00:13:21.439 "large_pool_count": 1024, 00:13:21.439 "small_bufsize": 8192, 00:13:21.439 "large_bufsize": 135168 00:13:21.439 } 00:13:21.439 } 00:13:21.439 ] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "sock", 00:13:21.439 "config": [ 00:13:21.439 { 00:13:21.439 "method": "sock_set_default_impl", 00:13:21.439 "params": { 00:13:21.439 "impl_name": "posix" 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "sock_impl_set_options", 00:13:21.439 "params": { 00:13:21.439 "impl_name": "ssl", 00:13:21.439 "recv_buf_size": 4096, 00:13:21.439 "send_buf_size": 4096, 00:13:21.439 "enable_recv_pipe": true, 00:13:21.439 "enable_quickack": false, 00:13:21.439 "enable_placement_id": 0, 00:13:21.439 "enable_zerocopy_send_server": true, 00:13:21.439 "enable_zerocopy_send_client": false, 00:13:21.439 "zerocopy_threshold": 0, 00:13:21.439 "tls_version": 0, 00:13:21.439 "enable_ktls": false 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "sock_impl_set_options", 00:13:21.439 "params": { 00:13:21.439 "impl_name": "posix", 00:13:21.439 "recv_buf_size": 2097152, 00:13:21.439 "send_buf_size": 2097152, 00:13:21.439 "enable_recv_pipe": true, 00:13:21.439 "enable_quickack": false, 00:13:21.439 "enable_placement_id": 0, 00:13:21.439 "enable_zerocopy_send_server": true, 00:13:21.439 "enable_zerocopy_send_client": false, 00:13:21.439 "zerocopy_threshold": 0, 00:13:21.439 "tls_version": 0, 00:13:21.439 "enable_ktls": false 00:13:21.439 } 00:13:21.439 } 00:13:21.439 ] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "vmd", 00:13:21.439 "config": [] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "accel", 00:13:21.439 "config": [ 00:13:21.439 { 00:13:21.439 "method": "accel_set_options", 00:13:21.439 "params": { 00:13:21.439 "small_cache_size": 128, 00:13:21.439 "large_cache_size": 16, 00:13:21.439 "task_count": 2048, 00:13:21.439 "sequence_count": 2048, 00:13:21.439 "buf_count": 2048 00:13:21.439 } 00:13:21.439 } 00:13:21.439 ] 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "subsystem": "bdev", 00:13:21.439 "config": [ 00:13:21.439 { 00:13:21.439 "method": "bdev_set_options", 00:13:21.439 "params": { 00:13:21.439 "bdev_io_pool_size": 65535, 00:13:21.439 "bdev_io_cache_size": 256, 00:13:21.439 "bdev_auto_examine": true, 00:13:21.439 "iobuf_small_cache_size": 128, 00:13:21.439 "iobuf_large_cache_size": 16 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "bdev_raid_set_options", 00:13:21.439 "params": { 00:13:21.439 "process_window_size_kb": 1024, 00:13:21.439 "process_max_bandwidth_mb_sec": 0 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "bdev_iscsi_set_options", 00:13:21.439 "params": { 00:13:21.439 "timeout_sec": 30 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "bdev_nvme_set_options", 00:13:21.439 "params": { 00:13:21.439 "action_on_timeout": "none", 00:13:21.439 "timeout_us": 0, 00:13:21.439 
"timeout_admin_us": 0, 00:13:21.439 "keep_alive_timeout_ms": 10000, 00:13:21.439 "arbitration_burst": 0, 00:13:21.439 "low_priority_weight": 0, 00:13:21.439 "medium_priority_weight": 0, 00:13:21.439 "high_priority_weight": 0, 00:13:21.439 "nvme_adminq_poll_period_us": 10000, 00:13:21.439 "nvme_ioq_poll_period_us": 0, 00:13:21.439 "io_queue_requests": 0, 00:13:21.439 "delay_cmd_submit": true, 00:13:21.439 "transport_retry_count": 4, 00:13:21.439 "bdev_retry_count": 3, 00:13:21.439 "transport_ack_timeout": 0, 00:13:21.439 "ctrlr_loss_timeout_sec": 0, 00:13:21.439 "reconnect_delay_sec": 0, 00:13:21.439 "fast_io_fail_timeout_sec": 0, 00:13:21.439 "disable_auto_failback": false, 00:13:21.439 "generate_uuids": false, 00:13:21.439 "transport_tos": 0, 00:13:21.439 "nvme_error_stat": false, 00:13:21.439 "rdma_srq_size": 0, 00:13:21.439 "io_path_stat": false, 00:13:21.439 "allow_accel_sequence": false, 00:13:21.439 "rdma_max_cq_size": 0, 00:13:21.439 "rdma_cm_event_timeout_ms": 0, 00:13:21.439 "dhchap_digests": [ 00:13:21.439 "sha256", 00:13:21.439 "sha384", 00:13:21.439 "sha512" 00:13:21.439 ], 00:13:21.439 "dhchap_dhgroups": [ 00:13:21.439 "null", 00:13:21.439 "ffdhe2048", 00:13:21.439 "ffdhe3072", 00:13:21.439 "ffdhe4096", 00:13:21.439 "ffdhe6144", 00:13:21.439 "ffdhe8192" 00:13:21.439 ] 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "bdev_nvme_set_hotplug", 00:13:21.439 "params": { 00:13:21.439 "period_us": 100000, 00:13:21.439 "enable": false 00:13:21.439 } 00:13:21.439 }, 00:13:21.439 { 00:13:21.439 "method": "bdev_malloc_create", 00:13:21.439 "params": { 00:13:21.439 "name": "malloc0", 00:13:21.439 "num_blocks": 8192, 00:13:21.439 "block_size": 4096, 00:13:21.439 "physical_block_size": 4096, 00:13:21.439 "uuid": "6ea0849f-832b-44c2-99b1-5f1157201770", 00:13:21.439 "optimal_io_boundary": 0, 00:13:21.439 "md_size": 0, 00:13:21.440 "dif_type": 0, 00:13:21.440 "dif_is_head_of_md": false, 00:13:21.440 "dif_pi_format": 0 00:13:21.440 } 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "method": "bdev_wait_for_examine" 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "scsi", 00:13:21.440 "config": null 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "scheduler", 00:13:21.440 "config": [ 00:13:21.440 { 00:13:21.440 "method": "framework_set_scheduler", 00:13:21.440 "params": { 00:13:21.440 "name": "static" 00:13:21.440 } 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "vhost_scsi", 00:13:21.440 "config": [] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "vhost_blk", 00:13:21.440 "config": [] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "ublk", 00:13:21.440 "config": [ 00:13:21.440 { 00:13:21.440 "method": "ublk_create_target", 00:13:21.440 "params": { 00:13:21.440 "cpumask": "1" 00:13:21.440 } 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "method": "ublk_start_disk", 00:13:21.440 "params": { 00:13:21.440 "bdev_name": "malloc0", 00:13:21.440 "ublk_id": 0, 00:13:21.440 "num_queues": 1, 00:13:21.440 "queue_depth": 128 00:13:21.440 } 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "nbd", 00:13:21.440 "config": [] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "nvmf", 00:13:21.440 "config": [ 00:13:21.440 { 00:13:21.440 "method": "nvmf_set_config", 00:13:21.440 "params": { 00:13:21.440 "discovery_filter": "match_any", 00:13:21.440 "admin_cmd_passthru": { 00:13:21.440 "identify_ctrlr": false 00:13:21.440 }, 00:13:21.440 "dhchap_digests": [ 
00:13:21.440 "sha256", 00:13:21.440 "sha384", 00:13:21.440 "sha512" 00:13:21.440 ], 00:13:21.440 "dhchap_dhgroups": [ 00:13:21.440 "null", 00:13:21.440 "ffdhe2048", 00:13:21.440 "ffdhe3072", 00:13:21.440 "ffdhe4096", 00:13:21.440 "ffdhe6144", 00:13:21.440 "ffdhe8192" 00:13:21.440 ] 00:13:21.440 } 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "method": "nvmf_set_max_subsystems", 00:13:21.440 "params": { 00:13:21.440 "max_subsystems": 1024 00:13:21.440 } 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "method": "nvmf_set_crdt", 00:13:21.440 "params": { 00:13:21.440 "crdt1": 0, 00:13:21.440 "crdt2": 0, 00:13:21.440 "crdt3": 0 00:13:21.440 } 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 }, 00:13:21.440 { 00:13:21.440 "subsystem": "iscsi", 00:13:21.440 "config": [ 00:13:21.440 { 00:13:21.440 "method": "iscsi_set_options", 00:13:21.440 "params": { 00:13:21.440 "node_base": "iqn.2016-06.io.spdk", 00:13:21.440 "max_sessions": 128, 00:13:21.440 "max_connections_per_session": 2, 00:13:21.440 "max_queue_depth": 64, 00:13:21.440 "default_time2wait": 2, 00:13:21.440 "default_time2retain": 20, 00:13:21.440 "first_burst_length": 8192, 00:13:21.440 "immediate_data": true, 00:13:21.440 "allow_duplicated_isid": false, 00:13:21.440 "error_recovery_level": 0, 00:13:21.440 "nop_timeout": 60, 00:13:21.440 "nop_in_interval": 30, 00:13:21.440 "disable_chap": false, 00:13:21.440 "require_chap": false, 00:13:21.440 "mutual_chap": false, 00:13:21.440 "chap_group": 0, 00:13:21.440 "max_large_datain_per_connection": 64, 00:13:21.440 "max_r2t_per_connection": 4, 00:13:21.440 "pdu_pool_size": 36864, 00:13:21.440 "immediate_data_pool_size": 16384, 00:13:21.440 "data_out_pool_size": 2048 00:13:21.440 } 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 } 00:13:21.440 ] 00:13:21.440 }' 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71189 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71189 ']' 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71189 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71189 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.440 killing process with pid 71189 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71189' 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71189 00:13:21.440 17:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71189 00:13:22.826 [2024-10-13 17:43:12.408702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:22.826 [2024-10-13 17:43:12.447741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:22.826 [2024-10-13 17:43:12.447907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:22.826 [2024-10-13 17:43:12.459594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:22.826 [2024-10-13 17:43:12.459670] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:13:22.826 [2024-10-13 17:43:12.459687] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:22.826 [2024-10-13 17:43:12.459732] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:22.826 [2024-10-13 17:43:12.459919] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:24.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71251 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71251 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71251 ']' 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:24.210 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:24.210 "subsystems": [ 00:13:24.210 { 00:13:24.210 "subsystem": "fsdev", 00:13:24.210 "config": [ 00:13:24.210 { 00:13:24.210 "method": "fsdev_set_opts", 00:13:24.210 "params": { 00:13:24.210 "fsdev_io_pool_size": 65535, 00:13:24.210 "fsdev_io_cache_size": 256 00:13:24.210 } 00:13:24.210 } 00:13:24.210 ] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "keyring", 00:13:24.210 "config": [] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "iobuf", 00:13:24.210 "config": [ 00:13:24.210 { 00:13:24.210 "method": "iobuf_set_options", 00:13:24.210 "params": { 00:13:24.210 "small_pool_count": 8192, 00:13:24.210 "large_pool_count": 1024, 00:13:24.210 "small_bufsize": 8192, 00:13:24.210 "large_bufsize": 135168 00:13:24.210 } 00:13:24.210 } 00:13:24.210 ] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "sock", 00:13:24.210 "config": [ 00:13:24.210 { 00:13:24.210 "method": "sock_set_default_impl", 00:13:24.210 "params": { 00:13:24.210 "impl_name": "posix" 00:13:24.210 } 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "method": "sock_impl_set_options", 00:13:24.210 "params": { 00:13:24.210 "impl_name": "ssl", 00:13:24.210 "recv_buf_size": 4096, 00:13:24.210 "send_buf_size": 4096, 00:13:24.210 "enable_recv_pipe": true, 00:13:24.210 "enable_quickack": false, 00:13:24.210 "enable_placement_id": 0, 00:13:24.210 "enable_zerocopy_send_server": true, 00:13:24.210 "enable_zerocopy_send_client": false, 00:13:24.210 "zerocopy_threshold": 0, 00:13:24.210 "tls_version": 0, 00:13:24.210 "enable_ktls": false 00:13:24.210 } 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "method": "sock_impl_set_options", 00:13:24.210 "params": { 00:13:24.210 "impl_name": "posix", 00:13:24.210 "recv_buf_size": 2097152, 00:13:24.210 "send_buf_size": 2097152, 00:13:24.210 "enable_recv_pipe": true, 00:13:24.210 "enable_quickack": false, 00:13:24.210 "enable_placement_id": 0, 00:13:24.210 "enable_zerocopy_send_server": true, 00:13:24.210 "enable_zerocopy_send_client": false, 00:13:24.210 "zerocopy_threshold": 0, 00:13:24.210 
"tls_version": 0, 00:13:24.210 "enable_ktls": false 00:13:24.210 } 00:13:24.210 } 00:13:24.210 ] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "vmd", 00:13:24.210 "config": [] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "accel", 00:13:24.210 "config": [ 00:13:24.210 { 00:13:24.210 "method": "accel_set_options", 00:13:24.210 "params": { 00:13:24.210 "small_cache_size": 128, 00:13:24.210 "large_cache_size": 16, 00:13:24.210 "task_count": 2048, 00:13:24.210 "sequence_count": 2048, 00:13:24.210 "buf_count": 2048 00:13:24.210 } 00:13:24.210 } 00:13:24.210 ] 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "subsystem": "bdev", 00:13:24.210 "config": [ 00:13:24.210 { 00:13:24.210 "method": "bdev_set_options", 00:13:24.210 "params": { 00:13:24.210 "bdev_io_pool_size": 65535, 00:13:24.210 "bdev_io_cache_size": 256, 00:13:24.210 "bdev_auto_examine": true, 00:13:24.210 "iobuf_small_cache_size": 128, 00:13:24.210 "iobuf_large_cache_size": 16 00:13:24.210 } 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "method": "bdev_raid_set_options", 00:13:24.210 "params": { 00:13:24.210 "process_window_size_kb": 1024, 00:13:24.210 "process_max_bandwidth_mb_sec": 0 00:13:24.210 } 00:13:24.210 }, 00:13:24.210 { 00:13:24.210 "method": "bdev_iscsi_set_options", 00:13:24.210 "params": { 00:13:24.211 "timeout_sec": 30 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "bdev_nvme_set_options", 00:13:24.211 "params": { 00:13:24.211 "action_on_timeout": "none", 00:13:24.211 "timeout_us": 0, 00:13:24.211 "timeout_admin_us": 0, 00:13:24.211 "keep_alive_timeout_ms": 10000, 00:13:24.211 "arbitration_burst": 0, 00:13:24.211 "low_priority_weight": 0, 00:13:24.211 "medium_priority_weight": 0, 00:13:24.211 "high_priority_weight": 0, 00:13:24.211 "nvme_adminq_poll_period_us": 10000, 00:13:24.211 "nvme_ioq_poll_period_us": 0, 00:13:24.211 "io_queue_requests": 0, 00:13:24.211 "delay_cmd_submit": true, 00:13:24.211 "transport_retry_count": 4, 00:13:24.211 "bdev_retry_count": 3, 00:13:24.211 "transport_ack_timeout": 0, 00:13:24.211 "ctrlr_loss_timeout_sec": 0, 00:13:24.211 "reconnect_delay_sec": 0, 00:13:24.211 "fast_io_fail_timeout_sec": 0, 00:13:24.211 "disable_auto_failback": false, 00:13:24.211 "generate_uuids": false, 00:13:24.211 "transport_tos": 0, 00:13:24.211 "nvme_error_stat": false, 00:13:24.211 "rdma_srq_size": 0, 00:13:24.211 "io_path_stat": false, 00:13:24.211 "allow_accel_sequence": false, 00:13:24.211 "rdma_max_cq_size": 0, 00:13:24.211 "rdma_cm_event_timeout_ms": 0, 00:13:24.211 "dhchap_digests": [ 00:13:24.211 "sha256", 00:13:24.211 "sha384", 00:13:24.211 "sha512" 00:13:24.211 ], 00:13:24.211 "dhchap_dhgroups": [ 00:13:24.211 "null", 00:13:24.211 "ffdhe2048", 00:13:24.211 "ffdhe3072", 00:13:24.211 "ffdhe4096", 00:13:24.211 "ffdhe6144", 00:13:24.211 "ffdhe8192" 00:13:24.211 ] 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "bdev_nvme_set_hotplug", 00:13:24.211 "params": { 00:13:24.211 "period_us": 100000, 00:13:24.211 "enable": false 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "bdev_malloc_create", 00:13:24.211 "params": { 00:13:24.211 "name": "malloc0", 00:13:24.211 "num_blocks": 8192, 00:13:24.211 "block_size": 4096, 00:13:24.211 "physical_block_size": 4096, 00:13:24.211 "uuid": "6ea0849f-832b-44c2-99b1-5f1157201770", 00:13:24.211 "optimal_io_boundary": 0, 00:13:24.211 "md_size": 0, 00:13:24.211 "dif_type": 0, 00:13:24.211 "dif_is_head_of_md": false, 00:13:24.211 "dif_pi_format": 0 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 
00:13:24.211 "method": "bdev_wait_for_examine" 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "scsi", 00:13:24.211 "config": null 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "scheduler", 00:13:24.211 "config": [ 00:13:24.211 { 00:13:24.211 "method": "framework_set_scheduler", 00:13:24.211 "params": { 00:13:24.211 "name": "static" 00:13:24.211 } 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "vhost_scsi", 00:13:24.211 "config": [] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "vhost_blk", 00:13:24.211 "config": [] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "ublk", 00:13:24.211 "config": [ 00:13:24.211 { 00:13:24.211 "method": "ublk_create_target", 00:13:24.211 "params": { 00:13:24.211 "cpumask": "1" 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "ublk_start_disk", 00:13:24.211 "params": { 00:13:24.211 "bdev_name": "malloc0", 00:13:24.211 "ublk_id": 0, 00:13:24.211 "num_queues": 1, 00:13:24.211 "queue_depth": 128 00:13:24.211 } 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "nbd", 00:13:24.211 "config": [] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "nvmf", 00:13:24.211 "config": [ 00:13:24.211 { 00:13:24.211 "method": "nvmf_set_config", 00:13:24.211 "params": { 00:13:24.211 "discovery_filter": "match_any", 00:13:24.211 "admin_cmd_passthru": { 00:13:24.211 "identify_ctrlr": false 00:13:24.211 }, 00:13:24.211 "dhchap_digests": [ 00:13:24.211 "sha256", 00:13:24.211 "sha384", 00:13:24.211 "sha512" 00:13:24.211 ], 00:13:24.211 "dhchap_dhgroups": [ 00:13:24.211 "null", 00:13:24.211 "ffdhe2048", 00:13:24.211 "ffdhe3072", 00:13:24.211 "ffdhe4096", 00:13:24.211 "ffdhe6144", 00:13:24.211 "ffdhe8192" 00:13:24.211 ] 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "nvmf_set_max_subsystems", 00:13:24.211 "params": { 00:13:24.211 "max_subsystems": 1024 00:13:24.211 } 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "method": "nvmf_set_crdt", 00:13:24.211 "params": { 00:13:24.211 "crdt1": 0, 00:13:24.211 "crdt2": 0, 00:13:24.211 "crdt3": 0 00:13:24.211 } 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 }, 00:13:24.211 { 00:13:24.211 "subsystem": "iscsi", 00:13:24.211 "config": [ 00:13:24.211 { 00:13:24.211 "method": "iscsi_set_options", 00:13:24.211 "params": { 00:13:24.211 "node_base": "iqn.2016-06.io.spdk", 00:13:24.211 "max_sessions": 128, 00:13:24.211 "max_connections_per_session": 2, 00:13:24.211 "max_queue_depth": 64, 00:13:24.211 "default_time2wait": 2, 00:13:24.211 "default_time2retain": 20, 00:13:24.211 "first_burst_length": 8192, 00:13:24.211 "immediate_data": true, 00:13:24.211 "allow_duplicated_isid": false, 00:13:24.211 "error_recovery_level": 0, 00:13:24.211 "nop_timeout": 60, 00:13:24.211 "nop_in_interval": 30, 00:13:24.211 "disable_chap": false, 00:13:24.211 "require_chap": false, 00:13:24.211 "mutual_chap": false, 00:13:24.211 "chap_group": 0, 00:13:24.211 "max_large_datain_per_connection": 64, 00:13:24.211 "max_r2t_per_connection": 4, 00:13:24.211 "pdu_pool_size": 36864, 00:13:24.211 "immediate_data_pool_size": 16384, 00:13:24.211 "data_out_pool_size": 2048 00:13:24.211 } 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 } 00:13:24.211 ] 00:13:24.211 }' 00:13:24.211 [2024-10-13 17:43:13.832446] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:13:24.211 [2024-10-13 17:43:13.832589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:13:24.211 [2024-10-13 17:43:13.983149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.472 [2024-10-13 17:43:14.084193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.043 [2024-10-13 17:43:14.779577] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:25.043 [2024-10-13 17:43:14.780271] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:25.043 [2024-10-13 17:43:14.787668] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:25.043 [2024-10-13 17:43:14.787734] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:25.043 [2024-10-13 17:43:14.787740] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:25.043 [2024-10-13 17:43:14.787749] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.043 [2024-10-13 17:43:14.796645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.043 [2024-10-13 17:43:14.796665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.043 [2024-10-13 17:43:14.803575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.043 [2024-10-13 17:43:14.803657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:25.043 [2024-10-13 17:43:14.820573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71251 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71251 ']' 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71251 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71251 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.304 killing process with pid 71251 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71251' 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71251 00:13:25.304 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71251 00:13:26.309 [2024-10-13 17:43:15.947879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.309 [2024-10-13 17:43:15.974592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.309 [2024-10-13 17:43:15.974695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.310 [2024-10-13 17:43:15.982586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.310 [2024-10-13 17:43:15.982631] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:26.310 [2024-10-13 17:43:15.982637] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:26.310 [2024-10-13 17:43:15.982665] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:26.310 [2024-10-13 17:43:15.982792] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:27.696 17:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:27.696 00:13:27.696 real 0m7.851s 00:13:27.696 user 0m5.254s 00:13:27.696 sys 0m3.259s 00:13:27.696 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.696 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:27.696 ************************************ 00:13:27.696 END TEST test_save_ublk_config 00:13:27.696 ************************************ 00:13:27.696 17:43:17 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71325 00:13:27.696 17:43:17 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.696 17:43:17 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71325 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@831 -- # '[' -z 71325 ']' 00:13:27.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.696 17:43:17 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.696 17:43:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.957 [2024-10-13 17:43:17.517421] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
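The target for the create tests is launched exactly as traced above (spdk_tgt -m 0x3 -L ublk) and waitforlisten blocks until the RPC socket answers. A rough standalone equivalent of that fixture, with a polling loop standing in for waitforlisten's retry logic (the rpc_get_methods probe is an illustrative choice, not what autotest_common.sh literally runs):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
# poll the default socket (/var/tmp/spdk.sock) until the target services RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done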
00:13:27.957 [2024-10-13 17:43:17.517931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71325 ] 00:13:27.957 [2024-10-13 17:43:17.671050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.218 [2024-10-13 17:43:17.816787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.218 [2024-10-13 17:43:17.816867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.789 17:43:18 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.789 17:43:18 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:28.789 17:43:18 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:28.789 17:43:18 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:28.789 17:43:18 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.789 17:43:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.789 ************************************ 00:13:28.789 START TEST test_create_ublk 00:13:28.789 ************************************ 00:13:28.789 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:28.789 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:28.790 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.790 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.790 [2024-10-13 17:43:18.567579] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:28.790 [2024-10-13 17:43:18.569693] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:28.790 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.790 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:28.790 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:28.790 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.790 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.050 [2024-10-13 17:43:18.783792] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:29.050 [2024-10-13 17:43:18.784277] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:29.050 [2024-10-13 17:43:18.784308] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:29.050 [2024-10-13 17:43:18.784317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.050 [2024-10-13 17:43:18.792912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.050 [2024-10-13 17:43:18.792948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.050 
[2024-10-13 17:43:18.799636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.050 [2024-10-13 17:43:18.808658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:29.050 [2024-10-13 17:43:18.834666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.050 17:43:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:29.050 { 00:13:29.050 "ublk_device": "/dev/ublkb0", 00:13:29.050 "id": 0, 00:13:29.050 "queue_depth": 512, 00:13:29.050 "num_queues": 4, 00:13:29.050 "bdev_name": "Malloc0" 00:13:29.050 } 00:13:29.050 ]' 00:13:29.050 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:29.311 17:43:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:29.311 17:43:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:29.311 17:43:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:29.311 17:43:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:29.311 17:43:19 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:29.311 17:43:19 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:29.312 17:43:19 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
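The jq assertions above validate the device record returned by ublk_get_disks. The same sequence test_create_ublk just ran can be issued directly against the RPC socket, with arguments exactly as traced in this log:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target
$rpc bdev_malloc_create 128 4096            # 128 MiB malloc bdev, 4096-byte blocks; returns the name (Malloc0 here)
$rpc ublk_start_disk Malloc0 0 -q 4 -d 512  # exposes /dev/ublkb0 with 4 queues of depth 512
$rpc ublk_get_disks -n 0                    # JSON array with ublk_device/id/queue_depth/num_queues/bdev_name
The fio write+verify pass that exercises /dev/ublkb0 follows immediately below.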
00:13:29.312 17:43:19 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:29.572 fio: verification read phase will never start because write phase uses all of runtime 00:13:29.572 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:29.572 fio-3.35 00:13:29.572 Starting 1 process 00:13:39.556 00:13:39.556 fio_test: (groupid=0, jobs=1): err= 0: pid=71368: Sun Oct 13 17:43:29 2024 00:13:39.556 write: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(525MiB/10001msec); 0 zone resets 00:13:39.556 clat (usec): min=49, max=8955, avg=73.59, stdev=129.07 00:13:39.556 lat (usec): min=50, max=8974, avg=74.03, stdev=129.14 00:13:39.556 clat percentiles (usec): 00:13:39.556 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 61], 00:13:39.556 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 68], 00:13:39.556 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 78], 95.00th=[ 84], 00:13:39.556 | 99.00th=[ 159], 99.50th=[ 239], 99.90th=[ 2769], 99.95th=[ 3556], 00:13:39.556 | 99.99th=[ 4113] 00:13:39.556 bw ( KiB/s): min=10504, max=59144, per=99.64%, avg=53594.11, stdev=11779.91, samples=19 00:13:39.556 iops : min= 2626, max=14786, avg=13398.53, stdev=2944.98, samples=19 00:13:39.556 lat (usec) : 50=0.01%, 100=97.46%, 250=2.11%, 500=0.23%, 750=0.01% 00:13:39.556 lat (usec) : 1000=0.01% 00:13:39.556 lat (msec) : 2=0.05%, 4=0.12%, 10=0.02% 00:13:39.556 cpu : usr=2.03%, sys=10.14%, ctx=134487, majf=0, minf=798 00:13:39.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.556 issued rwts: total=0,134485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.556 00:13:39.556 Run status group 0 (all jobs): 00:13:39.556 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=525MiB (551MB), run=10001-10001msec 00:13:39.556 00:13:39.556 Disk stats (read/write): 00:13:39.556 ublkb0: ios=0/132976, merge=0/0, ticks=0/8674, in_queue=8675, util=99.03% 00:13:39.556 17:43:29 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:39.556 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.556 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:39.556 [2024-10-13 17:43:29.264015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:39.556 [2024-10-13 17:43:29.293617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:39.556 [2024-10-13 17:43:29.294306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:39.557 [2024-10-13 17:43:29.302630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:39.557 [2024-10-13 17:43:29.302887] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:39.557 [2024-10-13 17:43:29.302900] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.557 17:43:29 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:39.557 [2024-10-13 17:43:29.317647] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:39.557 request: 00:13:39.557 { 00:13:39.557 "ublk_id": 0, 00:13:39.557 "method": "ublk_stop_disk", 00:13:39.557 "req_id": 1 00:13:39.557 } 00:13:39.557 Got JSON-RPC error response 00:13:39.557 response: 00:13:39.557 { 00:13:39.557 "code": -19, 00:13:39.557 "message": "No such device" 00:13:39.557 } 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.557 17:43:29 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:39.557 [2024-10-13 17:43:29.333643] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:39.557 [2024-10-13 17:43:29.341572] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:39.557 [2024-10-13 17:43:29.341605] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.557 17:43:29 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.557 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.124 17:43:29 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:40.124 17:43:29 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:40.124 17:43:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:40.124 00:13:40.124 real 0m11.233s 00:13:40.124 user 0m0.520s 00:13:40.124 sys 0m1.096s 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.124 17:43:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.124 ************************************ 00:13:40.124 END TEST test_create_ublk 00:13:40.124 ************************************ 00:13:40.124 17:43:29 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:40.124 17:43:29 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:40.124 17:43:29 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.125 17:43:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.125 ************************************ 00:13:40.125 START TEST test_create_multi_ublk 00:13:40.125 ************************************ 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.125 [2024-10-13 17:43:29.840573] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:40.125 [2024-10-13 17:43:29.842089] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.125 17:43:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.383 [2024-10-13 17:43:30.068693] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:40.383 [2024-10-13 17:43:30.069001] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:40.383 [2024-10-13 17:43:30.069012] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:40.383 [2024-10-13 17:43:30.069020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:40.383 [2024-10-13 17:43:30.086581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:40.383 [2024-10-13 17:43:30.086598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:40.383 [2024-10-13 17:43:30.092575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:40.383 [2024-10-13 17:43:30.093072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:40.383 [2024-10-13 17:43:30.144586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.383 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.642 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.642 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:40.642 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:40.642 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.642 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.642 [2024-10-13 17:43:30.384673] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:40.642 [2024-10-13 17:43:30.384975] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:40.642 [2024-10-13 17:43:30.384988] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:40.642 [2024-10-13 17:43:30.384993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:40.642 [2024-10-13 17:43:30.396595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:40.642 [2024-10-13 17:43:30.396612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:40.642 [2024-10-13 17:43:30.408581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:40.642 [2024-10-13 17:43:30.409077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:40.642 [2024-10-13 17:43:30.444591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.900 
17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.900 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:40.900 [2024-10-13 17:43:30.684678] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:40.900 [2024-10-13 17:43:30.684976] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:40.900 [2024-10-13 17:43:30.684983] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:40.900 [2024-10-13 17:43:30.684989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:40.900 [2024-10-13 17:43:30.696591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:40.900 [2024-10-13 17:43:30.696611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:40.900 [2024-10-13 17:43:30.708576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:40.900 [2024-10-13 17:43:30.709064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:41.158 [2024-10-13 17:43:30.733580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.158 17:43:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:41.158 [2024-10-13 17:43:30.971679] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:41.158 [2024-10-13 17:43:30.971981] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:41.158 [2024-10-13 17:43:30.971994] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:41.158 [2024-10-13 17:43:30.971999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:41.417 
[2024-10-13 17:43:30.983603] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:41.417 [2024-10-13 17:43:30.983620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:41.417 [2024-10-13 17:43:30.995588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:41.417 [2024-10-13 17:43:30.996086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:41.417 [2024-10-13 17:43:31.000349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:41.417 { 00:13:41.417 "ublk_device": "/dev/ublkb0", 00:13:41.417 "id": 0, 00:13:41.417 "queue_depth": 512, 00:13:41.417 "num_queues": 4, 00:13:41.417 "bdev_name": "Malloc0" 00:13:41.417 }, 00:13:41.417 { 00:13:41.417 "ublk_device": "/dev/ublkb1", 00:13:41.417 "id": 1, 00:13:41.417 "queue_depth": 512, 00:13:41.417 "num_queues": 4, 00:13:41.417 "bdev_name": "Malloc1" 00:13:41.417 }, 00:13:41.417 { 00:13:41.417 "ublk_device": "/dev/ublkb2", 00:13:41.417 "id": 2, 00:13:41.417 "queue_depth": 512, 00:13:41.417 "num_queues": 4, 00:13:41.417 "bdev_name": "Malloc2" 00:13:41.417 }, 00:13:41.417 { 00:13:41.417 "ublk_device": "/dev/ublkb3", 00:13:41.417 "id": 3, 00:13:41.417 "queue_depth": 512, 00:13:41.417 "num_queues": 4, 00:13:41.417 "bdev_name": "Malloc3" 00:13:41.417 } 00:13:41.417 ]' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.417 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:41.676 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.934 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:41.934 [2024-10-13 17:43:31.686656] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:41.934 [2024-10-13 17:43:31.735626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:41.934 [2024-10-13 17:43:31.736508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:41.934 [2024-10-13 17:43:31.743602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:41.934 [2024-10-13 17:43:31.743870] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:41.934 [2024-10-13 17:43:31.743883] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.193 [2024-10-13 17:43:31.759632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:42.193 [2024-10-13 17:43:31.795624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:42.193 [2024-10-13 17:43:31.796467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:42.193 [2024-10-13 17:43:31.799716] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:42.193 [2024-10-13 17:43:31.799962] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:42.193 [2024-10-13 17:43:31.799973] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.193 [2024-10-13 17:43:31.818651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:42.193 [2024-10-13 17:43:31.850614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:42.193 [2024-10-13 17:43:31.851384] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:42.193 [2024-10-13 17:43:31.858584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:42.193 [2024-10-13 17:43:31.858812] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:42.193 [2024-10-13 17:43:31.858825] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.193 [2024-10-13 17:43:31.874647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:42.193 [2024-10-13 17:43:31.905079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:42.193 [2024-10-13 17:43:31.906042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:42.193 [2024-10-13 17:43:31.914599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:42.193 [2024-10-13 17:43:31.914845] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:42.193 [2024-10-13 17:43:31.914857] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 17:43:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:42.452 [2024-10-13 17:43:32.114633] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:42.452 [2024-10-13 17:43:32.122576] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:42.452 [2024-10-13 17:43:32.122603] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:42.452 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:42.452 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:42.452 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:42.452 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.452 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.710 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.710 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:42.710 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:42.710 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.710 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.278 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.278 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:43.278 17:43:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:43.278 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.278 17:43:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.278 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.278 17:43:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:43.278 17:43:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:43.278 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.278 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:43.536 17:43:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:43.536 00:13:43.536 real 0m3.482s 00:13:43.536 user 0m0.825s 00:13:43.536 sys 0m0.135s 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.536 ************************************ 00:13:43.536 END TEST test_create_multi_ublk 00:13:43.536 ************************************ 00:13:43.536 17:43:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.536 17:43:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:43.536 17:43:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:43.536 17:43:33 ublk -- ublk/ublk.sh@130 -- # killprocess 71325 00:13:43.536 17:43:33 ublk -- common/autotest_common.sh@950 -- # '[' -z 71325 ']' 00:13:43.536 17:43:33 ublk -- common/autotest_common.sh@954 -- # kill -0 71325 00:13:43.536 17:43:33 ublk -- common/autotest_common.sh@955 -- # uname 00:13:43.536 17:43:33 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.536 17:43:33 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71325 00:13:43.795 17:43:33 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.795 killing process with pid 71325 00:13:43.795 17:43:33 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.795 17:43:33 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71325' 00:13:43.795 17:43:33 ublk -- common/autotest_common.sh@969 -- # kill 71325 00:13:43.795 17:43:33 ublk -- common/autotest_common.sh@974 -- # wait 71325 00:13:44.362 [2024-10-13 17:43:33.905698] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:44.362 [2024-10-13 17:43:33.905746] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:44.929 00:13:44.929 real 0m25.184s 00:13:44.929 user 0m35.866s 00:13:44.929 sys 0m9.254s 00:13:44.929 17:43:34 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.929 17:43:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.929 ************************************ 00:13:44.929 END TEST ublk 00:13:44.929 ************************************ 00:13:44.929 17:43:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:44.929 
17:43:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:44.929 17:43:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.929 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:44.929 ************************************ 00:13:44.929 START TEST ublk_recovery 00:13:44.929 ************************************ 00:13:44.929 17:43:34 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:44.929 * Looking for test storage... 00:13:44.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:44.929 17:43:34 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:44.929 17:43:34 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:44.929 17:43:34 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:44.929 17:43:34 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.929 17:43:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:44.930 17:43:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.189 17:43:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:45.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.189 --rc genhtml_branch_coverage=1 00:13:45.189 --rc genhtml_function_coverage=1 00:13:45.189 --rc genhtml_legend=1 00:13:45.189 --rc geninfo_all_blocks=1 00:13:45.189 --rc geninfo_unexecuted_blocks=1 00:13:45.189 00:13:45.189 ' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:45.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.189 --rc genhtml_branch_coverage=1 00:13:45.189 --rc genhtml_function_coverage=1 00:13:45.189 --rc genhtml_legend=1 00:13:45.189 --rc geninfo_all_blocks=1 00:13:45.189 --rc geninfo_unexecuted_blocks=1 00:13:45.189 00:13:45.189 ' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:45.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.189 --rc genhtml_branch_coverage=1 00:13:45.189 --rc genhtml_function_coverage=1 00:13:45.189 --rc genhtml_legend=1 00:13:45.189 --rc geninfo_all_blocks=1 00:13:45.189 --rc geninfo_unexecuted_blocks=1 00:13:45.189 00:13:45.189 ' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:45.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.189 --rc genhtml_branch_coverage=1 00:13:45.189 --rc genhtml_function_coverage=1 00:13:45.189 --rc genhtml_legend=1 00:13:45.189 --rc geninfo_all_blocks=1 00:13:45.189 --rc geninfo_unexecuted_blocks=1 00:13:45.189 00:13:45.189 ' 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:45.189 17:43:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71714 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71714 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71714 ']' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.189 17:43:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.189 17:43:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:45.189 [2024-10-13 17:43:34.822469] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:45.189 [2024-10-13 17:43:34.822578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:13:45.189 [2024-10-13 17:43:34.963534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.450 [2024-10-13 17:43:35.041636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.450 [2024-10-13 17:43:35.041640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:46.017 17:43:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 [2024-10-13 17:43:35.666580] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:46.017 [2024-10-13 17:43:35.668089] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.017 17:43:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 malloc0 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.017 17:43:35 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 [2024-10-13 17:43:35.751690] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:46.017 [2024-10-13 17:43:35.751762] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:46.017 [2024-10-13 17:43:35.751775] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:46.017 [2024-10-13 17:43:35.751781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:46.017 [2024-10-13 17:43:35.759590] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:46.017 [2024-10-13 17:43:35.759606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:46.017 [2024-10-13 17:43:35.767589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:46.017 [2024-10-13 17:43:35.767703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:46.017 [2024-10-13 17:43:35.789586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:46.017 1 00:13:46.017 17:43:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.017 17:43:35 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:47.394 17:43:36 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71751 00:13:47.394 17:43:36 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:47.394 17:43:36 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:47.394 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.394 fio-3.35 00:13:47.394 Starting 1 process 00:13:52.699 17:43:41 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71714 00:13:52.699 17:43:41 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:57.986 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71714 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:57.986 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71860 00:13:57.986 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.986 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71860 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71860 ']' 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.986 17:43:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.986 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:57.986 [2024-10-13 17:43:46.915533] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
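After kill -9 removes the original target (pid 71714) mid-fio, a fresh spdk_tgt (pid 71860, traced above) is brought up against the still-live kernel device. A condensed view of the recovery sequence the test drives next, with commands as they appear in this log:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096
# re-binds the existing /dev/ublkb1 to the new target via GET_DEV_INFO + START/END_USER_RECOVERY
$rpc ublk_recover_disk malloc0 1
# the fio job launched against the old target keeps running and completes once recovery finishes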
00:13:57.986 [2024-10-13 17:43:46.915712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71860 ] 00:13:57.986 [2024-10-13 17:43:47.071396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:57.986 [2024-10-13 17:43:47.169352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.986 [2024-10-13 17:43:47.169463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:58.248 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:58.248 [2024-10-13 17:43:47.844585] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:58.248 [2024-10-13 17:43:47.846919] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.248 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:58.248 malloc0 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.248 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:58.248 [2024-10-13 17:43:47.964789] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:58.248 [2024-10-13 17:43:47.964843] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:58.248 [2024-10-13 17:43:47.964854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:58.248 [2024-10-13 17:43:47.972650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:58.248 [2024-10-13 17:43:47.972680] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:13:58.248 [2024-10-13 17:43:47.972690] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:58.248 [2024-10-13 17:43:47.972787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:58.248 1 00:13:58.248 17:43:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.248 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71751 00:13:58.248 [2024-10-13 17:43:47.980592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:58.248 [2024-10-13 17:43:47.988739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:58.248 [2024-10-13 17:43:47.995920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:58.248 [2024-10-13 
17:43:47.995951] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:54.477 00:14:54.477 fio_test: (groupid=0, jobs=1): err= 0: pid=71758: Sun Oct 13 17:44:37 2024 00:14:54.477 read: IOPS=24.3k, BW=94.8MiB/s (99.4MB/s)(5686MiB/60002msec) 00:14:54.477 slat (nsec): min=1197, max=806572, avg=5373.58, stdev=2069.03 00:14:54.477 clat (usec): min=930, max=6201.5k, avg=2582.69, stdev=40459.13 00:14:54.477 lat (usec): min=935, max=6201.5k, avg=2588.07, stdev=40459.13 00:14:54.477 clat percentiles (usec): 00:14:54.477 | 1.00th=[ 1844], 5.00th=[ 1942], 10.00th=[ 1991], 20.00th=[ 2089], 00:14:54.477 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2180], 00:14:54.477 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 3523], 00:14:54.477 | 99.00th=[ 5932], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[12780], 00:14:54.477 | 99.99th=[13435] 00:14:54.477 bw ( KiB/s): min= 1040, max=123384, per=100.00%, avg=106767.81, stdev=18398.81, samples=108 00:14:54.477 iops : min= 260, max=30846, avg=26691.94, stdev=4599.70, samples=108 00:14:54.477 write: IOPS=24.2k, BW=94.7MiB/s (99.3MB/s)(5680MiB/60002msec); 0 zone resets 00:14:54.477 slat (nsec): min=1173, max=527555, avg=5587.06, stdev=2095.18 00:14:54.477 clat (usec): min=731, max=6201.6k, avg=2684.60, stdev=41765.59 00:14:54.477 lat (usec): min=736, max=6201.6k, avg=2690.19, stdev=41765.61 00:14:54.477 clat percentiles (usec): 00:14:54.477 | 1.00th=[ 1909], 5.00th=[ 2024], 10.00th=[ 2073], 20.00th=[ 2180], 00:14:54.477 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2278], 00:14:54.477 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2376], 95.00th=[ 3458], 00:14:54.477 | 99.00th=[ 5997], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[12911], 00:14:54.477 | 99.99th=[13435] 00:14:54.477 bw ( KiB/s): min= 1000, max=123232, per=100.00%, avg=106657.85, stdev=18263.51, samples=108 00:14:54.477 iops : min= 250, max=30808, avg=26664.46, stdev=4565.88, samples=108 00:14:54.477 lat (usec) : 750=0.01%, 1000=0.01% 00:14:54.477 lat (msec) : 2=6.99%, 4=89.23%, 10=3.72%, 20=0.06%, >=2000=0.01% 00:14:54.477 cpu : usr=5.43%, sys=27.47%, ctx=96111, majf=0, minf=14 00:14:54.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:54.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.477 issued rwts: total=1455598,1454029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.477 00:14:54.477 Run status group 0 (all jobs): 00:14:54.477 READ: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=5686MiB (5962MB), run=60002-60002msec 00:14:54.477 WRITE: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=5680MiB (5956MB), run=60002-60002msec 00:14:54.477 00:14:54.477 Disk stats (read/write): 00:14:54.477 ublkb1: ios=1452239/1450698, merge=0/0, ticks=3670198/3692035, in_queue=7362233, util=99.90% 00:14:54.477 17:44:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 [2024-10-13 17:44:37.053709] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:54.477 [2024-10-13 17:44:37.090600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:14:54.477 [2024-10-13 17:44:37.090763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:54.477 [2024-10-13 17:44:37.100589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:54.477 [2024-10-13 17:44:37.100682] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:54.477 [2024-10-13 17:44:37.100691] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.477 17:44:37 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 [2024-10-13 17:44:37.116644] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:54.477 [2024-10-13 17:44:37.124575] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:54.477 [2024-10-13 17:44:37.124605] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.477 17:44:37 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:54.477 17:44:37 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:54.477 17:44:37 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71860 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71860 ']' 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71860 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71860 00:14:54.477 killing process with pid 71860 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71860' 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71860 00:14:54.477 17:44:37 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71860 00:14:54.477 [2024-10-13 17:44:38.208721] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:54.477 [2024-10-13 17:44:38.208769] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:54.477 ************************************ 00:14:54.477 END TEST ublk_recovery 00:14:54.477 ************************************ 00:14:54.477 00:14:54.477 real 1m4.359s 00:14:54.477 user 1m38.727s 00:14:54.477 sys 0m38.936s 00:14:54.477 17:44:38 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.477 17:44:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 17:44:39 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:54.477 17:44:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.477 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 17:44:39 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:54.477 17:44:39 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:54.477 17:44:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:54.477 17:44:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.477 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:14:54.477 ************************************ 00:14:54.477 START TEST ftl 00:14:54.477 ************************************ 00:14:54.477 17:44:39 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:54.477 * Looking for test storage... 00:14:54.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.477 17:44:39 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.477 17:44:39 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.477 17:44:39 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.477 17:44:39 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.477 17:44:39 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.477 17:44:39 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.477 17:44:39 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.477 17:44:39 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.477 17:44:39 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.477 17:44:39 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:54.477 17:44:39 ftl -- scripts/common.sh@345 -- # : 1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.477 17:44:39 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.477 17:44:39 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@353 -- # local d=1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.477 17:44:39 ftl -- scripts/common.sh@355 -- # echo 1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.477 17:44:39 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@353 -- # local d=2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.477 17:44:39 ftl -- scripts/common.sh@355 -- # echo 2 00:14:54.477 17:44:39 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.478 17:44:39 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.478 17:44:39 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.478 17:44:39 ftl -- scripts/common.sh@368 -- # return 0 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.478 --rc genhtml_branch_coverage=1 00:14:54.478 --rc genhtml_function_coverage=1 00:14:54.478 --rc genhtml_legend=1 00:14:54.478 --rc geninfo_all_blocks=1 00:14:54.478 --rc geninfo_unexecuted_blocks=1 00:14:54.478 00:14:54.478 ' 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.478 --rc genhtml_branch_coverage=1 00:14:54.478 --rc genhtml_function_coverage=1 00:14:54.478 --rc genhtml_legend=1 00:14:54.478 --rc geninfo_all_blocks=1 00:14:54.478 --rc geninfo_unexecuted_blocks=1 00:14:54.478 00:14:54.478 ' 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.478 --rc genhtml_branch_coverage=1 00:14:54.478 --rc genhtml_function_coverage=1 00:14:54.478 --rc genhtml_legend=1 00:14:54.478 --rc geninfo_all_blocks=1 00:14:54.478 --rc geninfo_unexecuted_blocks=1 00:14:54.478 00:14:54.478 ' 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.478 --rc genhtml_branch_coverage=1 00:14:54.478 --rc genhtml_function_coverage=1 00:14:54.478 --rc genhtml_legend=1 00:14:54.478 --rc geninfo_all_blocks=1 00:14:54.478 --rc geninfo_unexecuted_blocks=1 00:14:54.478 00:14:54.478 ' 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:54.478 17:44:39 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:54.478 17:44:39 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.478 17:44:39 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.478 17:44:39 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
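The cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15 here) is older than 2, which selects between the legacy and current spellings of the coverage options exported just below. Field by field, the comparison amounts to something like this simplified stand-in (the real helper also splits on '-' and ':', per the IFS=.-: in the trace):

    # Simplified dotted-version less-than test, in the spirit of cmp_versions.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields compare as 0, so '1.15 < 2' is really '1.15 < 2.0'.
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }

    version_lt 1.15 2 && echo "old lcov: use the legacy --rc lcov_* option names"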
00:14:54.478 17:44:39 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:54.478 17:44:39 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.478 17:44:39 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.478 17:44:39 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.478 17:44:39 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:54.478 17:44:39 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:54.478 17:44:39 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:54.478 17:44:39 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:54.478 17:44:39 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.478 17:44:39 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.478 17:44:39 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:54.478 17:44:39 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:54.478 17:44:39 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:54.478 17:44:39 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:54.478 17:44:39 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:54.478 17:44:39 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:54.478 17:44:39 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:54.478 17:44:39 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:54.478 17:44:39 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:54.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:54.478 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:54.478 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:54.478 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:54.478 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72665 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:54.478 17:44:39 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72665 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@831 -- # '[' -z 72665 ']' 00:14:54.478 17:44:39 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.478 17:44:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:54.478 [2024-10-13 17:44:39.787174] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:14:54.478 [2024-10-13 17:44:39.787528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:14:54.478 [2024-10-13 17:44:39.942713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.478 [2024-10-13 17:44:40.043824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.478 17:44:40 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.478 17:44:40 ftl -- common/autotest_common.sh@864 -- # return 0 00:14:54.478 17:44:40 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:54.478 17:44:40 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:54.478 17:44:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:54.478 17:44:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:54.478 17:44:41 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:54.478 17:44:41 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:54.478 17:44:41 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@50 -- # break 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@63 -- # break 00:14:54.478 17:44:42 ftl -- ftl/ftl.sh@66 -- # killprocess 72665 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@950 -- # '[' -z 72665 ']' 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@954 -- # kill -0 72665 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@955 -- # uname 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.478 17:44:42 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72665 00:14:54.478 killing process with pid 72665 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72665' 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@969 -- # kill 72665 00:14:54.478 17:44:42 ftl -- common/autotest_common.sh@974 -- # wait 72665 00:14:54.478 17:44:43 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:54.478 17:44:43 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:54.478 17:44:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:54.478 17:44:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.478 17:44:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:54.478 ************************************ 00:14:54.478 START TEST ftl_fio_basic 00:14:54.478 ************************************ 00:14:54.478 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:54.478 * Looking for test storage... 00:14:54.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.479 --rc genhtml_branch_coverage=1 00:14:54.479 --rc genhtml_function_coverage=1 00:14:54.479 --rc genhtml_legend=1 00:14:54.479 --rc geninfo_all_blocks=1 00:14:54.479 --rc geninfo_unexecuted_blocks=1 00:14:54.479 00:14:54.479 ' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.479 --rc genhtml_branch_coverage=1 00:14:54.479 --rc genhtml_function_coverage=1 00:14:54.479 --rc genhtml_legend=1 00:14:54.479 --rc geninfo_all_blocks=1 00:14:54.479 --rc geninfo_unexecuted_blocks=1 00:14:54.479 00:14:54.479 ' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.479 --rc genhtml_branch_coverage=1 00:14:54.479 --rc genhtml_function_coverage=1 00:14:54.479 --rc genhtml_legend=1 00:14:54.479 --rc geninfo_all_blocks=1 00:14:54.479 --rc geninfo_unexecuted_blocks=1 00:14:54.479 00:14:54.479 ' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.479 --rc genhtml_branch_coverage=1 00:14:54.479 --rc genhtml_function_coverage=1 00:14:54.479 --rc genhtml_legend=1 00:14:54.479 --rc geninfo_all_blocks=1 00:14:54.479 --rc geninfo_unexecuted_blocks=1 00:14:54.479 00:14:54.479 ' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
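This is the same prologue ftl.sh ran a moment ago, now re-executed as fio.sh sources test/ftl/common.sh: resolve the test directory from the script's own location, walk two levels up to the repository root, and point rpc_py at scripts/rpc.py. Condensed (BASH_SOURCE stands in here for the fio.sh path shown in the trace):

    # Equivalent of the ftl/common.sh@8-10 lines traced above and below.
    testdir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")                   # .../spdk
    rpc_py=$rootdir/scripts/rpc.py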
00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72797 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72797 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72797 ']' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:54.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.479 17:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:54.479 [2024-10-13 17:44:43.950492] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
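waitforlisten, invoked above with the new pid 72797, blocks until the freshly launched spdk_tgt answers on the RPC socket; that is what prints the "Waiting for process..." line. A rough sketch of that loop, assuming the helper polls the socket with rpc_get_methods (the real implementation in autotest_common.sh differs in detail, but the max_retries=100 local and the echo are visible in the trace):

    # Rough waitforlisten equivalent: poll the RPC socket until the target answers.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 100; i > 0; i-- )); do            # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target already died
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }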
00:14:54.479 [2024-10-13 17:44:43.951103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72797 ] 00:14:54.479 [2024-10-13 17:44:44.107593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:54.479 [2024-10-13 17:44:44.225693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.479 [2024-10-13 17:44:44.225790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.479 [2024-10-13 17:44:44.225815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:55.051 17:44:44 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:55.312 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:55.573 { 00:14:55.573 "name": "nvme0n1", 00:14:55.573 "aliases": [ 00:14:55.573 "5d33e0d8-2978-47d4-96d0-de7db26d5000" 00:14:55.573 ], 00:14:55.573 "product_name": "NVMe disk", 00:14:55.573 "block_size": 4096, 00:14:55.573 "num_blocks": 1310720, 00:14:55.573 "uuid": "5d33e0d8-2978-47d4-96d0-de7db26d5000", 00:14:55.573 "numa_id": -1, 00:14:55.573 "assigned_rate_limits": { 00:14:55.573 "rw_ios_per_sec": 0, 00:14:55.573 "rw_mbytes_per_sec": 0, 00:14:55.573 "r_mbytes_per_sec": 0, 00:14:55.573 "w_mbytes_per_sec": 0 00:14:55.573 }, 00:14:55.573 "claimed": false, 00:14:55.573 "zoned": false, 00:14:55.573 "supported_io_types": { 00:14:55.573 "read": true, 00:14:55.573 "write": true, 00:14:55.573 "unmap": true, 00:14:55.573 "flush": true, 00:14:55.573 "reset": true, 00:14:55.573 "nvme_admin": true, 00:14:55.573 "nvme_io": true, 00:14:55.573 "nvme_io_md": false, 00:14:55.573 "write_zeroes": true, 00:14:55.573 "zcopy": false, 00:14:55.573 "get_zone_info": false, 00:14:55.573 "zone_management": false, 00:14:55.573 "zone_append": false, 00:14:55.573 "compare": true, 00:14:55.573 "compare_and_write": false, 00:14:55.573 "abort": true, 00:14:55.573 
"seek_hole": false, 00:14:55.573 "seek_data": false, 00:14:55.573 "copy": true, 00:14:55.573 "nvme_iov_md": false 00:14:55.573 }, 00:14:55.573 "driver_specific": { 00:14:55.573 "nvme": [ 00:14:55.573 { 00:14:55.573 "pci_address": "0000:00:11.0", 00:14:55.573 "trid": { 00:14:55.573 "trtype": "PCIe", 00:14:55.573 "traddr": "0000:00:11.0" 00:14:55.573 }, 00:14:55.573 "ctrlr_data": { 00:14:55.573 "cntlid": 0, 00:14:55.573 "vendor_id": "0x1b36", 00:14:55.573 "model_number": "QEMU NVMe Ctrl", 00:14:55.573 "serial_number": "12341", 00:14:55.573 "firmware_revision": "8.0.0", 00:14:55.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:55.573 "oacs": { 00:14:55.573 "security": 0, 00:14:55.573 "format": 1, 00:14:55.573 "firmware": 0, 00:14:55.573 "ns_manage": 1 00:14:55.573 }, 00:14:55.573 "multi_ctrlr": false, 00:14:55.573 "ana_reporting": false 00:14:55.573 }, 00:14:55.573 "vs": { 00:14:55.573 "nvme_version": "1.4" 00:14:55.573 }, 00:14:55.573 "ns_data": { 00:14:55.573 "id": 1, 00:14:55.573 "can_share": false 00:14:55.573 } 00:14:55.573 } 00:14:55.573 ], 00:14:55.573 "mp_policy": "active_passive" 00:14:55.573 } 00:14:55.573 } 00:14:55.573 ]' 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:55.573 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:55.834 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:55.834 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=967b71c8-f15f-452b-bdfc-c895d573925c 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 967b71c8-f15f-452b-bdfc-c895d573925c 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=be95c9a6-10f9-4dc9-b752-8e863d9cd89e 
00:14:56.095 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:56.095 17:44:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:56.355 { 00:14:56.355 "name": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:56.355 "aliases": [ 00:14:56.355 "lvs/nvme0n1p0" 00:14:56.355 ], 00:14:56.355 "product_name": "Logical Volume", 00:14:56.355 "block_size": 4096, 00:14:56.355 "num_blocks": 26476544, 00:14:56.355 "uuid": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:56.355 "assigned_rate_limits": { 00:14:56.355 "rw_ios_per_sec": 0, 00:14:56.355 "rw_mbytes_per_sec": 0, 00:14:56.355 "r_mbytes_per_sec": 0, 00:14:56.355 "w_mbytes_per_sec": 0 00:14:56.355 }, 00:14:56.355 "claimed": false, 00:14:56.355 "zoned": false, 00:14:56.355 "supported_io_types": { 00:14:56.355 "read": true, 00:14:56.355 "write": true, 00:14:56.355 "unmap": true, 00:14:56.355 "flush": false, 00:14:56.355 "reset": true, 00:14:56.355 "nvme_admin": false, 00:14:56.355 "nvme_io": false, 00:14:56.355 "nvme_io_md": false, 00:14:56.355 "write_zeroes": true, 00:14:56.355 "zcopy": false, 00:14:56.355 "get_zone_info": false, 00:14:56.355 "zone_management": false, 00:14:56.355 "zone_append": false, 00:14:56.355 "compare": false, 00:14:56.355 "compare_and_write": false, 00:14:56.355 "abort": false, 00:14:56.355 "seek_hole": true, 00:14:56.355 "seek_data": true, 00:14:56.355 "copy": false, 00:14:56.355 "nvme_iov_md": false 00:14:56.355 }, 00:14:56.355 "driver_specific": { 00:14:56.355 "lvol": { 00:14:56.355 "lvol_store_uuid": "967b71c8-f15f-452b-bdfc-c895d573925c", 00:14:56.355 "base_bdev": "nvme0n1", 00:14:56.355 "thin_provision": true, 00:14:56.355 "num_allocated_clusters": 0, 00:14:56.355 "snapshot": false, 00:14:56.355 "clone": false, 00:14:56.355 "esnap_clone": false 00:14:56.355 } 00:14:56.355 } 00:14:56.355 } 00:14:56.355 ]' 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:56.355 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:56.356 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:56.356 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:56.614 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:56.614 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.872 17:44:46 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:56.872 { 00:14:56.872 "name": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:56.872 "aliases": [ 00:14:56.872 "lvs/nvme0n1p0" 00:14:56.872 ], 00:14:56.872 "product_name": "Logical Volume", 00:14:56.872 "block_size": 4096, 00:14:56.872 "num_blocks": 26476544, 00:14:56.872 "uuid": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:56.872 "assigned_rate_limits": { 00:14:56.872 "rw_ios_per_sec": 0, 00:14:56.872 "rw_mbytes_per_sec": 0, 00:14:56.872 "r_mbytes_per_sec": 0, 00:14:56.872 "w_mbytes_per_sec": 0 00:14:56.872 }, 00:14:56.872 "claimed": false, 00:14:56.872 "zoned": false, 00:14:56.872 "supported_io_types": { 00:14:56.872 "read": true, 00:14:56.872 "write": true, 00:14:56.872 "unmap": true, 00:14:56.872 "flush": false, 00:14:56.872 "reset": true, 00:14:56.872 "nvme_admin": false, 00:14:56.872 "nvme_io": false, 00:14:56.872 "nvme_io_md": false, 00:14:56.872 "write_zeroes": true, 00:14:56.872 "zcopy": false, 00:14:56.872 "get_zone_info": false, 00:14:56.872 "zone_management": false, 00:14:56.872 "zone_append": false, 00:14:56.872 "compare": false, 00:14:56.872 "compare_and_write": false, 00:14:56.872 "abort": false, 00:14:56.872 "seek_hole": true, 00:14:56.872 "seek_data": true, 00:14:56.872 "copy": false, 00:14:56.872 "nvme_iov_md": false 00:14:56.872 }, 00:14:56.872 "driver_specific": { 00:14:56.872 "lvol": { 00:14:56.872 "lvol_store_uuid": "967b71c8-f15f-452b-bdfc-c895d573925c", 00:14:56.872 "base_bdev": "nvme0n1", 00:14:56.872 "thin_provision": true, 00:14:56.872 "num_allocated_clusters": 0, 00:14:56.872 "snapshot": false, 00:14:56.872 "clone": false, 00:14:56.872 "esnap_clone": false 00:14:56.872 } 00:14:56.872 } 00:14:56.872 } 00:14:56.872 ]' 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:56.872 17:44:46 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:57.130 17:44:46 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:57.131 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:57.131 17:44:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be95c9a6-10f9-4dc9-b752-8e863d9cd89e 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:57.389 { 00:14:57.389 "name": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:57.389 "aliases": [ 00:14:57.389 "lvs/nvme0n1p0" 00:14:57.389 ], 00:14:57.389 "product_name": "Logical Volume", 00:14:57.389 "block_size": 4096, 00:14:57.389 "num_blocks": 26476544, 00:14:57.389 "uuid": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:14:57.389 "assigned_rate_limits": { 00:14:57.389 "rw_ios_per_sec": 0, 00:14:57.389 "rw_mbytes_per_sec": 0, 00:14:57.389 "r_mbytes_per_sec": 0, 00:14:57.389 "w_mbytes_per_sec": 0 00:14:57.389 }, 00:14:57.389 "claimed": false, 00:14:57.389 "zoned": false, 00:14:57.389 "supported_io_types": { 00:14:57.389 "read": true, 00:14:57.389 "write": true, 00:14:57.389 "unmap": true, 00:14:57.389 "flush": false, 00:14:57.389 "reset": true, 00:14:57.389 "nvme_admin": false, 00:14:57.389 "nvme_io": false, 00:14:57.389 "nvme_io_md": false, 00:14:57.389 "write_zeroes": true, 00:14:57.389 "zcopy": false, 00:14:57.389 "get_zone_info": false, 00:14:57.389 "zone_management": false, 00:14:57.389 "zone_append": false, 00:14:57.389 "compare": false, 00:14:57.389 "compare_and_write": false, 00:14:57.389 "abort": false, 00:14:57.389 "seek_hole": true, 00:14:57.389 "seek_data": true, 00:14:57.389 "copy": false, 00:14:57.389 "nvme_iov_md": false 00:14:57.389 }, 00:14:57.389 "driver_specific": { 00:14:57.389 "lvol": { 00:14:57.389 "lvol_store_uuid": "967b71c8-f15f-452b-bdfc-c895d573925c", 00:14:57.389 "base_bdev": "nvme0n1", 00:14:57.389 "thin_provision": true, 00:14:57.389 "num_allocated_clusters": 0, 00:14:57.389 "snapshot": false, 00:14:57.389 "clone": false, 00:14:57.389 "esnap_clone": false 00:14:57.389 } 00:14:57.389 } 00:14:57.389 } 00:14:57.389 ]' 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:57.389 17:44:47 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d be95c9a6-10f9-4dc9-b752-8e863d9cd89e -c nvc0n1p0 --l2p_dram_limit 60 00:14:57.649 [2024-10-13 17:44:47.331526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.331671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:57.649 [2024-10-13 17:44:47.331692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:57.649 
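One genuine script error is recorded a few entries back: ftl/fio.sh line 52 executed '[' -eq 1 ']' and bash printed "unary operator expected". The left-hand side of a [ ... -eq 1 ] test expanded to an empty string, so test saw -eq as its first argument; the failed test returns nonzero and execution falls through to line 56, but the condition never evaluates as intended. The failure mode and the usual fixes, with a hypothetical variable name since the source of line 52 is not shown in this log:

    unset maybe_empty
    if [ $maybe_empty -eq 1 ]; then echo yes; fi         # '[ -eq 1 ]': unary operator expected
    if [ "${maybe_empty:-0}" -eq 1 ]; then echo yes; fi  # fix: quote and default the value
    if [[ $maybe_empty -eq 1 ]]; then echo yes; fi       # fix: [[ ]] treats the empty operand as 0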
[2024-10-13 17:44:47.331699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.331771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.331779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:57.649 [2024-10-13 17:44:47.331788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:14:57.649 [2024-10-13 17:44:47.331796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.331832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:57.649 [2024-10-13 17:44:47.332452] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:57.649 [2024-10-13 17:44:47.332472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.332481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:57.649 [2024-10-13 17:44:47.332489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:14:57.649 [2024-10-13 17:44:47.332495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.332568] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1e27fea4-d993-44ef-8e36-43e00fa19832 00:14:57.649 [2024-10-13 17:44:47.333855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.333885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:57.649 [2024-10-13 17:44:47.333893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:14:57.649 [2024-10-13 17:44:47.333903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.340773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.340802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:57.649 [2024-10-13 17:44:47.340810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:14:57.649 [2024-10-13 17:44:47.340817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.340911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.340919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:57.649 [2024-10-13 17:44:47.340928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:14:57.649 [2024-10-13 17:44:47.340939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.340987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.340996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:57.649 [2024-10-13 17:44:47.341002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:57.649 [2024-10-13 17:44:47.341010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.341037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:57.649 [2024-10-13 17:44:47.344301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 
17:44:47.344323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:57.649 [2024-10-13 17:44:47.344334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:14:57.649 [2024-10-13 17:44:47.344341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.344380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.344389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:57.649 [2024-10-13 17:44:47.344397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:57.649 [2024-10-13 17:44:47.344404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.344430] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:57.649 [2024-10-13 17:44:47.344550] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:57.649 [2024-10-13 17:44:47.344575] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:57.649 [2024-10-13 17:44:47.344584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:57.649 [2024-10-13 17:44:47.344594] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:57.649 [2024-10-13 17:44:47.344601] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:57.649 [2024-10-13 17:44:47.344609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:57.649 [2024-10-13 17:44:47.344616] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:57.649 [2024-10-13 17:44:47.344623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:57.649 [2024-10-13 17:44:47.344629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:57.649 [2024-10-13 17:44:47.344636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.344642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:57.649 [2024-10-13 17:44:47.344649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:14:57.649 [2024-10-13 17:44:47.344658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.344727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.649 [2024-10-13 17:44:47.344734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:57.649 [2024-10-13 17:44:47.344742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:14:57.649 [2024-10-13 17:44:47.344748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.649 [2024-10-13 17:44:47.344836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:57.649 [2024-10-13 17:44:47.344846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:57.649 [2024-10-13 17:44:47.344854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:57.649 [2024-10-13 17:44:47.344860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.649 [2024-10-13 17:44:47.344869] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:14:57.649 [2024-10-13 17:44:47.344874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:57.649 [2024-10-13 17:44:47.344880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:57.649 [2024-10-13 17:44:47.344885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:57.649 [2024-10-13 17:44:47.344892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:57.649 [2024-10-13 17:44:47.344896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:57.649 [2024-10-13 17:44:47.344903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:57.649 [2024-10-13 17:44:47.344908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:57.649 [2024-10-13 17:44:47.344914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:57.649 [2024-10-13 17:44:47.344919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:57.649 [2024-10-13 17:44:47.344926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:57.649 [2024-10-13 17:44:47.344931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.649 [2024-10-13 17:44:47.344940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:57.649 [2024-10-13 17:44:47.344945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:57.649 [2024-10-13 17:44:47.344952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.649 [2024-10-13 17:44:47.344957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:57.650 [2024-10-13 17:44:47.344963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:57.650 [2024-10-13 17:44:47.344968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:57.650 [2024-10-13 17:44:47.344975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:57.650 [2024-10-13 17:44:47.344979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:57.650 [2024-10-13 17:44:47.344990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:57.650 [2024-10-13 17:44:47.344995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:57.650 [2024-10-13 17:44:47.345002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:57.650 [2024-10-13 17:44:47.345014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:57.650 [2024-10-13 17:44:47.345019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:57.650 [2024-10-13 17:44:47.345031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:57.650 [2024-10-13 17:44:47.345039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:57.650 [2024-10-13 17:44:47.345050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:57.650 [2024-10-13 17:44:47.345067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:57.650 [2024-10-13 17:44:47.345073] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:57.650 [2024-10-13 17:44:47.345078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:57.650 [2024-10-13 17:44:47.345085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:57.650 [2024-10-13 17:44:47.345090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:57.650 [2024-10-13 17:44:47.345101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:57.650 [2024-10-13 17:44:47.345108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:57.650 [2024-10-13 17:44:47.345120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:57.650 [2024-10-13 17:44:47.345126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:57.650 [2024-10-13 17:44:47.345133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:57.650 [2024-10-13 17:44:47.345138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:57.650 [2024-10-13 17:44:47.345146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:57.650 [2024-10-13 17:44:47.345151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:57.650 [2024-10-13 17:44:47.345158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:57.650 [2024-10-13 17:44:47.345163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:57.650 [2024-10-13 17:44:47.345169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:57.650 [2024-10-13 17:44:47.345177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:57.650 [2024-10-13 17:44:47.345186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:57.650 [2024-10-13 17:44:47.345203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:57.650 [2024-10-13 17:44:47.345208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:57.650 [2024-10-13 17:44:47.345215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:57.650 [2024-10-13 17:44:47.345221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:57.650 [2024-10-13 17:44:47.345228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:57.650 [2024-10-13 17:44:47.345234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:57.650 [2024-10-13 17:44:47.345241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:14:57.650 [2024-10-13 17:44:47.345246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:57.650 [2024-10-13 17:44:47.345254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:57.650 [2024-10-13 17:44:47.345285] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:57.650 [2024-10-13 17:44:47.345293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:57.650 [2024-10-13 17:44:47.345306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:57.650 [2024-10-13 17:44:47.345312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:57.650 [2024-10-13 17:44:47.345318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:57.650 [2024-10-13 17:44:47.345324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.650 [2024-10-13 17:44:47.345331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:57.650 [2024-10-13 17:44:47.345340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:14:57.650 [2024-10-13 17:44:47.345346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.650 [2024-10-13 17:44:47.345416] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
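Note on the stack under test: the layout dump above is emitted while FTL startup assembles device ftl0 from a base bdev plus the nvc0n1p0 write-buffer cache (the bdev_get_bdevs output further down reports base_bdev be95c9a6-10f9-4dc9-b752-8e863d9cd89e). A minimal sketch of how such a stack is typically built with stock SPDK RPCs follows; the split step and the exact flags are assumptions inferred from the bdev names in this log, not commands taken from the test script:

  # Sketch only, assuming stock SPDK rpc.py; names mirror this log.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 1            # assumed origin of nvc0n1p0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0

The "Create new FTL, UUID ..." notice above reports the UUID that a later load of the same device would reference.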
00:14:57.650 [2024-10-13 17:44:47.345432] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:00.220 [2024-10-13 17:44:49.585758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.220 [2024-10-13 17:44:49.585822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:00.221 [2024-10-13 17:44:49.585838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2240.333 ms 00:15:00.221 [2024-10-13 17:44:49.585853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.614127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.614316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:00.221 [2024-10-13 17:44:49.614338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.051 ms 00:15:00.221 [2024-10-13 17:44:49.614349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.614492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.614506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:00.221 [2024-10-13 17:44:49.614514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:00.221 [2024-10-13 17:44:49.614526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.659518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.659583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:00.221 [2024-10-13 17:44:49.659598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.912 ms 00:15:00.221 [2024-10-13 17:44:49.659609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.659656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.659669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:00.221 [2024-10-13 17:44:49.659679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:00.221 [2024-10-13 17:44:49.659710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.660174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.660209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:00.221 [2024-10-13 17:44:49.660220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:15:00.221 [2024-10-13 17:44:49.660231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.660377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.660393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:00.221 [2024-10-13 17:44:49.660403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:15:00.221 [2024-10-13 17:44:49.660415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.677647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.677678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:00.221 [2024-10-13 
17:44:49.677688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:15:00.221 [2024-10-13 17:44:49.677698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.690105] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:00.221 [2024-10-13 17:44:49.707468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.707643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:00.221 [2024-10-13 17:44:49.707664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.671 ms 00:15:00.221 [2024-10-13 17:44:49.707673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.758469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.758507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:00.221 [2024-10-13 17:44:49.758523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.759 ms 00:15:00.221 [2024-10-13 17:44:49.758532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.758750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.758766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:00.221 [2024-10-13 17:44:49.758781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:15:00.221 [2024-10-13 17:44:49.758788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.781937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.782071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:00.221 [2024-10-13 17:44:49.782092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.091 ms 00:15:00.221 [2024-10-13 17:44:49.782104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.804570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.804690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:00.221 [2024-10-13 17:44:49.804710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.425 ms 00:15:00.221 [2024-10-13 17:44:49.804718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.805297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.805315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:00.221 [2024-10-13 17:44:49.805329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:15:00.221 [2024-10-13 17:44:49.805336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.868678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.868712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:00.221 [2024-10-13 17:44:49.868729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.297 ms 00:15:00.221 [2024-10-13 17:44:49.868737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 
17:44:49.893171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.893203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:00.221 [2024-10-13 17:44:49.893217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.349 ms 00:15:00.221 [2024-10-13 17:44:49.893225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.915701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.915827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:00.221 [2024-10-13 17:44:49.915847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.429 ms 00:15:00.221 [2024-10-13 17:44:49.915855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.939236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.939352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:00.221 [2024-10-13 17:44:49.939371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.344 ms 00:15:00.221 [2024-10-13 17:44:49.939379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.939424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.939434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:00.221 [2024-10-13 17:44:49.939447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:00.221 [2024-10-13 17:44:49.939455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.939554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.221 [2024-10-13 17:44:49.939580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:00.221 [2024-10-13 17:44:49.939590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:00.221 [2024-10-13 17:44:49.939598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.221 [2024-10-13 17:44:49.940631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2608.598 ms, result 0 00:15:00.221 { 00:15:00.221 "name": "ftl0", 00:15:00.221 "uuid": "1e27fea4-d993-44ef-8e36-43e00fa19832" 00:15:00.221 } 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.221 17:44:49 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:00.479 17:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:00.737 [ 00:15:00.737 { 00:15:00.737 "name": "ftl0", 00:15:00.737 "aliases": [ 00:15:00.737 "1e27fea4-d993-44ef-8e36-43e00fa19832" 00:15:00.737 ], 00:15:00.737 "product_name": "FTL 
disk", 00:15:00.737 "block_size": 4096, 00:15:00.737 "num_blocks": 20971520, 00:15:00.737 "uuid": "1e27fea4-d993-44ef-8e36-43e00fa19832", 00:15:00.737 "assigned_rate_limits": { 00:15:00.737 "rw_ios_per_sec": 0, 00:15:00.737 "rw_mbytes_per_sec": 0, 00:15:00.737 "r_mbytes_per_sec": 0, 00:15:00.737 "w_mbytes_per_sec": 0 00:15:00.737 }, 00:15:00.737 "claimed": false, 00:15:00.737 "zoned": false, 00:15:00.737 "supported_io_types": { 00:15:00.737 "read": true, 00:15:00.737 "write": true, 00:15:00.737 "unmap": true, 00:15:00.737 "flush": true, 00:15:00.737 "reset": false, 00:15:00.737 "nvme_admin": false, 00:15:00.737 "nvme_io": false, 00:15:00.737 "nvme_io_md": false, 00:15:00.737 "write_zeroes": true, 00:15:00.738 "zcopy": false, 00:15:00.738 "get_zone_info": false, 00:15:00.738 "zone_management": false, 00:15:00.738 "zone_append": false, 00:15:00.738 "compare": false, 00:15:00.738 "compare_and_write": false, 00:15:00.738 "abort": false, 00:15:00.738 "seek_hole": false, 00:15:00.738 "seek_data": false, 00:15:00.738 "copy": false, 00:15:00.738 "nvme_iov_md": false 00:15:00.738 }, 00:15:00.738 "driver_specific": { 00:15:00.738 "ftl": { 00:15:00.738 "base_bdev": "be95c9a6-10f9-4dc9-b752-8e863d9cd89e", 00:15:00.738 "cache": "nvc0n1p0" 00:15:00.738 } 00:15:00.738 } 00:15:00.738 } 00:15:00.738 ] 00:15:00.738 17:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:00.738 17:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:00.738 17:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:00.996 17:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:00.996 17:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:00.996 [2024-10-13 17:44:50.757520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.757584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:00.996 [2024-10-13 17:44:50.757599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:00.996 [2024-10-13 17:44:50.757611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.757650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:00.996 [2024-10-13 17:44:50.760419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.760452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:00.996 [2024-10-13 17:44:50.760465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:15:00.996 [2024-10-13 17:44:50.760473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.760929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.760950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:00.996 [2024-10-13 17:44:50.760961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:15:00.996 [2024-10-13 17:44:50.760969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.764213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.764240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:00.996 
[2024-10-13 17:44:50.764259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:15:00.996 [2024-10-13 17:44:50.764269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.770422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.770550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:00.996 [2024-10-13 17:44:50.770581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:15:00.996 [2024-10-13 17:44:50.770589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.794769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:00.996 [2024-10-13 17:44:50.794800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:00.996 [2024-10-13 17:44:50.794814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.093 ms 00:15:00.996 [2024-10-13 17:44:50.794823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:00.996 [2024-10-13 17:44:50.809667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.809790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:01.256 [2024-10-13 17:44:50.809812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.785 ms 00:15:01.256 [2024-10-13 17:44:50.809820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.810015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.810029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:01.256 [2024-10-13 17:44:50.810039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:15:01.256 [2024-10-13 17:44:50.810047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.832853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.832963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:01.256 [2024-10-13 17:44:50.832982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.779 ms 00:15:01.256 [2024-10-13 17:44:50.832990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.855667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.855726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:01.256 [2024-10-13 17:44:50.855740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.541 ms 00:15:01.256 [2024-10-13 17:44:50.855747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.877369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.877399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:01.256 [2024-10-13 17:44:50.877411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.572 ms 00:15:01.256 [2024-10-13 17:44:50.877418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.899460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.256 [2024-10-13 17:44:50.899577] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:01.256 [2024-10-13 17:44:50.899596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.941 ms 00:15:01.256 [2024-10-13 17:44:50.899603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.256 [2024-10-13 17:44:50.899644] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:01.256 [2024-10-13 17:44:50.899658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 
[2024-10-13 17:44:50.899849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:01.256 [2024-10-13 17:44:50.899956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.899965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.899972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.899982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.899989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:01.257 [2024-10-13 17:44:50.900067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:01.257 [2024-10-13 17:44:50.900620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:01.257 [2024-10-13 17:44:50.900634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e27fea4-d993-44ef-8e36-43e00fa19832 00:15:01.257 [2024-10-13 17:44:50.900642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:01.257 [2024-10-13 17:44:50.900653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:01.257 [2024-10-13 17:44:50.900660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:01.257 [2024-10-13 17:44:50.900670] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:01.257 [2024-10-13 17:44:50.900677] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:01.257 [2024-10-13 17:44:50.900687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:01.257 [2024-10-13 17:44:50.900696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:01.257 [2024-10-13 17:44:50.900703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:01.257 [2024-10-13 17:44:50.900709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:01.257 [2024-10-13 17:44:50.900719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.257 [2024-10-13 17:44:50.900727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:01.257 [2024-10-13 17:44:50.900737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:15:01.257 [2024-10-13 17:44:50.900744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.257 [2024-10-13 17:44:50.913726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.257 [2024-10-13 17:44:50.913753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:01.257 [2024-10-13 17:44:50.913765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:15:01.257 [2024-10-13 17:44:50.913775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.257 [2024-10-13 17:44:50.914145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.257 [2024-10-13 17:44:50.914156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:01.257 [2024-10-13 17:44:50.914167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:15:01.257 [2024-10-13 17:44:50.914175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.257 [2024-10-13 17:44:50.959686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.257 [2024-10-13 17:44:50.959721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:01.257 [2024-10-13 17:44:50.959733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.257 [2024-10-13 17:44:50.959744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
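The Rollback entries continuing below are the 'FTL shutdown' sequence undoing each startup step in reverse order. Shutdown was triggered by the bdev_ftl_unload RPC traced earlier, after the test captured the bdev configuration for later use by fio's spdk_bdev engine (the echo '{"subsystems": [' / save_subsystem_config / echo ']}' trace above). A hedged sketch of that capture pattern, with an assumed output path:

  # Sketch of the config-capture pattern visible in the ftl/fio.sh trace above.
  (
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  ) > /tmp/ftl_bdev.json    # assumed path; typically handed to fio via spdk_json_conf

This lets the fio stage further down re-instantiate the bdevs inside the fio process after the original SPDK app has been killed.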
00:15:01.257 [2024-10-13 17:44:50.959814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.257 [2024-10-13 17:44:50.959822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:01.257 [2024-10-13 17:44:50.959832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.257 [2024-10-13 17:44:50.959839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.257 [2024-10-13 17:44:50.959937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.257 [2024-10-13 17:44:50.959948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:01.257 [2024-10-13 17:44:50.959958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.257 [2024-10-13 17:44:50.959965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.257 [2024-10-13 17:44:50.959999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.257 [2024-10-13 17:44:50.960007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:01.257 [2024-10-13 17:44:50.960016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.258 [2024-10-13 17:44:50.960024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.258 [2024-10-13 17:44:51.044137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.258 [2024-10-13 17:44:51.044178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:01.258 [2024-10-13 17:44:51.044191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.258 [2024-10-13 17:44:51.044203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:01.516 [2024-10-13 17:44:51.109283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 17:44:51.109291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:01.516 [2024-10-13 17:44:51.109393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 17:44:51.109400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:01.516 [2024-10-13 17:44:51.109522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 17:44:51.109530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:01.516 [2024-10-13 17:44:51.109686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 
17:44:51.109694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:01.516 [2024-10-13 17:44:51.109773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 17:44:51.109780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.516 [2024-10-13 17:44:51.109842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:01.516 [2024-10-13 17:44:51.109852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.516 [2024-10-13 17:44:51.109860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.516 [2024-10-13 17:44:51.109924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:01.517 [2024-10-13 17:44:51.109936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:01.517 [2024-10-13 17:44:51.109945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:01.517 [2024-10-13 17:44:51.109952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.517 [2024-10-13 17:44:51.110123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.586 ms, result 0 00:15:01.517 true 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72797 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72797 ']' 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72797 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72797 00:15:01.517 killing process with pid 72797 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72797' 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72797 00:15:01.517 17:44:51 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72797 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.078 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:08.338 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:08.338 fio-3.35 00:15:08.338 Starting 1 thread 00:15:12.541 00:15:12.541 test: (groupid=0, jobs=1): err= 0: pid=72977: Sun Oct 13 17:45:01 2024 00:15:12.541 read: IOPS=1295, BW=86.0MiB/s (90.2MB/s)(255MiB/2959msec) 00:15:12.541 slat (nsec): min=2982, max=25711, avg=3921.50, stdev=1757.12 00:15:12.541 clat (usec): min=250, max=972, avg=348.12, stdev=73.30 00:15:12.541 lat (usec): min=256, max=977, avg=352.04, stdev=73.71 00:15:12.541 clat percentiles (usec): 00:15:12.541 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 318], 00:15:12.541 | 30.00th=[ 318], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 322], 00:15:12.542 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 445], 95.00th=[ 494], 00:15:12.542 | 99.00th=[ 668], 99.50th=[ 758], 99.90th=[ 857], 99.95th=[ 971], 00:15:12.542 | 99.99th=[ 971] 00:15:12.542 write: IOPS=1304, BW=86.6MiB/s (90.8MB/s)(256MiB/2956msec); 0 zone resets 00:15:12.542 slat (nsec): min=13695, max=85222, avg=17171.96, stdev=3905.28 00:15:12.542 clat (usec): min=289, max=1012, avg=386.96, stdev=98.84 00:15:12.542 lat (usec): min=311, max=1037, avg=404.13, stdev=99.01 00:15:12.542 clat percentiles (usec): 00:15:12.542 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 343], 00:15:12.542 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 351], 00:15:12.542 | 70.00th=[ 363], 80.00th=[ 408], 90.00th=[ 482], 95.00th=[ 619], 00:15:12.542 | 99.00th=[ 807], 99.50th=[ 889], 99.90th=[ 1004], 99.95th=[ 1004], 00:15:12.542 | 99.99th=[ 1012] 00:15:12.542 bw ( KiB/s): min=85680, max=89896, per=99.78%, avg=88508.80, stdev=1815.49, samples=5 00:15:12.542 iops : min= 1260, max= 1322, avg=1301.60, stdev=26.70, samples=5 00:15:12.542 lat (usec) : 500=93.25%, 750=5.51%, 1000=1.18% 00:15:12.542 
lat (msec) : 2=0.05% 00:15:12.542 cpu : usr=99.26%, sys=0.17%, ctx=4, majf=0, minf=1169 00:15:12.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.542 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.542 00:15:12.542 Run status group 0 (all jobs): 00:15:12.542 READ: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=255MiB (267MB), run=2959-2959msec 00:15:12.542 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=256MiB (269MB), run=2956-2956msec 00:15:13.927 ----------------------------------------------------- 00:15:13.927 Suppressions used: 00:15:13.927 count bytes template 00:15:13.927 1 5 /usr/src/fio/parse.c 00:15:13.927 1 8 libtcmalloc_minimal.so 00:15:13.927 1 904 libcrypto.so 00:15:13.927 ----------------------------------------------------- 00:15:13.927 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.927 17:45:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:14.188 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:14.188 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:14.188 fio-3.35 00:15:14.188 Starting 2 threads 00:15:46.264 00:15:46.264 first_half: (groupid=0, jobs=1): err= 0: pid=73070: Sun Oct 13 17:45:33 2024 00:15:46.264 read: IOPS=2285, BW=9141KiB/s (9360kB/s)(255MiB/28530msec) 00:15:46.264 slat (usec): min=3, max=113, avg= 4.48, stdev= 1.28 00:15:46.264 clat (usec): min=1117, max=410116, avg=37598.44, stdev=24449.07 00:15:46.264 lat (usec): min=1122, max=410125, avg=37602.92, stdev=24449.22 00:15:46.264 clat percentiles (msec): 00:15:46.264 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:15:46.264 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:15:46.264 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 55], 00:15:46.264 | 99.00th=[ 146], 99.50th=[ 226], 99.90th=[ 326], 99.95th=[ 363], 00:15:46.264 | 99.99th=[ 401] 00:15:46.264 write: IOPS=3149, BW=12.3MiB/s (12.9MB/s)(256MiB/20807msec); 0 zone resets 00:15:46.264 slat (usec): min=3, max=889, avg= 7.01, stdev= 9.40 00:15:46.264 clat (usec): min=376, max=148436, avg=18300.04, stdev=33710.39 00:15:46.264 lat (usec): min=387, max=148446, avg=18307.04, stdev=33710.40 00:15:46.264 clat percentiles (usec): 00:15:46.264 | 1.00th=[ 1156], 5.00th=[ 1582], 10.00th=[ 1827], 20.00th=[ 2212], 00:15:46.264 | 30.00th=[ 2606], 40.00th=[ 3326], 50.00th=[ 5342], 60.00th=[ 7701], 00:15:46.264 | 70.00th=[ 10028], 80.00th=[ 14746], 90.00th=[ 94897], 95.00th=[110625], 00:15:46.264 | 99.00th=[127402], 99.50th=[135267], 99.90th=[139461], 99.95th=[141558], 00:15:46.264 | 99.99th=[145753] 00:15:46.264 bw ( KiB/s): min= 1832, max=40856, per=89.16%, avg=19418.07, stdev=9585.17, samples=27 00:15:46.264 iops : min= 458, max=10214, avg=4854.52, stdev=2396.29, samples=27 00:15:46.264 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.23% 00:15:46.264 lat (msec) : 2=7.12%, 4=14.80%, 10=13.60%, 20=8.79%, 50=45.60% 00:15:46.264 lat (msec) : 100=4.10%, 250=5.52%, 500=0.19% 00:15:46.264 cpu : usr=99.31%, sys=0.16%, ctx=39, majf=0, minf=5513 00:15:46.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:46.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.264 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.264 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.264 second_half: (groupid=0, jobs=1): err= 0: pid=73071: Sun Oct 13 17:45:33 2024 00:15:46.264 read: IOPS=2271, BW=9086KiB/s (9304kB/s)(255MiB/28709msec) 00:15:46.264 slat (nsec): min=3017, max=42329, avg=5096.03, stdev=1279.34 00:15:46.264 clat (usec): min=901, max=376555, avg=36939.02, stdev=23426.81 00:15:46.264 lat (usec): min=907, max=376559, avg=36944.12, stdev=23426.87 00:15:46.264 clat percentiles (msec): 00:15:46.264 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:15:46.264 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:15:46.264 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 54], 00:15:46.264 
| 99.00th=[ 169], 99.50th=[ 215], 99.90th=[ 264], 99.95th=[ 288], 00:15:46.264 | 99.99th=[ 342] 00:15:46.264 write: IOPS=2722, BW=10.6MiB/s (11.2MB/s)(256MiB/24072msec); 0 zone resets 00:15:46.264 slat (usec): min=3, max=1156, avg= 6.99, stdev= 9.18 00:15:46.264 clat (usec): min=382, max=148798, avg=19296.42, stdev=34489.87 00:15:46.264 lat (usec): min=389, max=148811, avg=19303.41, stdev=34490.20 00:15:46.264 clat percentiles (usec): 00:15:46.264 | 1.00th=[ 947], 5.00th=[ 1434], 10.00th=[ 1680], 20.00th=[ 2089], 00:15:46.264 | 30.00th=[ 2606], 40.00th=[ 3261], 50.00th=[ 4490], 60.00th=[ 6259], 00:15:46.264 | 70.00th=[ 11994], 80.00th=[ 17171], 90.00th=[ 96994], 95.00th=[111674], 00:15:46.264 | 99.00th=[130548], 99.50th=[137364], 99.90th=[143655], 99.95th=[145753], 00:15:46.264 | 99.99th=[147850] 00:15:46.264 bw ( KiB/s): min= 48, max=41152, per=70.79%, avg=15418.59, stdev=11499.79, samples=34 00:15:46.264 iops : min= 12, max=10288, avg=3854.65, stdev=2874.95, samples=34 00:15:46.264 lat (usec) : 500=0.01%, 750=0.13%, 1000=0.48% 00:15:46.264 lat (msec) : 2=8.34%, 4=14.46%, 10=10.60%, 20=10.31%, 50=45.74% 00:15:46.264 lat (msec) : 100=4.05%, 250=5.79%, 500=0.11% 00:15:46.264 cpu : usr=99.15%, sys=0.12%, ctx=69, majf=0, minf=5586 00:15:46.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:46.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.264 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.264 issued rwts: total=65212,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.264 00:15:46.264 Run status group 0 (all jobs): 00:15:46.264 READ: bw=17.7MiB/s (18.6MB/s), 9086KiB/s-9141KiB/s (9304kB/s-9360kB/s), io=509MiB (534MB), run=28530-28709msec 00:15:46.264 WRITE: bw=21.3MiB/s (22.3MB/s), 10.6MiB/s-12.3MiB/s (11.2MB/s-12.9MB/s), io=512MiB (537MB), run=20807-24072msec 00:15:46.264 ----------------------------------------------------- 00:15:46.264 Suppressions used: 00:15:46.264 count bytes template 00:15:46.264 2 10 /usr/src/fio/parse.c 00:15:46.264 2 192 /usr/src/fio/iolog.c 00:15:46.264 1 8 libtcmalloc_minimal.so 00:15:46.264 1 904 libcrypto.so 00:15:46.264 ----------------------------------------------------- 00:15:46.264 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:46.264 17:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:46.265 
17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:46.265 17:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:46.265 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:46.265 fio-3.35 00:15:46.265 Starting 1 thread 00:16:04.432 00:16:04.432 test: (groupid=0, jobs=1): err= 0: pid=73437: Sun Oct 13 17:45:53 2024 00:16:04.432 read: IOPS=6612, BW=25.8MiB/s (27.1MB/s)(255MiB/9861msec) 00:16:04.432 slat (nsec): min=2949, max=20779, avg=4626.62, stdev=1096.52 00:16:04.432 clat (usec): min=1087, max=36201, avg=19351.15, stdev=2824.20 00:16:04.432 lat (usec): min=1095, max=36205, avg=19355.78, stdev=2824.20 00:16:04.432 clat percentiles (usec): 00:16:04.432 | 1.00th=[15401], 5.00th=[15795], 10.00th=[16057], 20.00th=[16712], 00:16:04.432 | 30.00th=[17433], 40.00th=[18220], 50.00th=[19268], 60.00th=[19792], 00:16:04.432 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22676], 95.00th=[24773], 00:16:04.432 | 99.00th=[27657], 99.50th=[28705], 99.90th=[33162], 99.95th=[33817], 00:16:04.432 | 99.99th=[35390] 00:16:04.432 write: IOPS=9260, BW=36.2MiB/s (37.9MB/s)(256MiB/7077msec); 0 zone resets 00:16:04.432 slat (usec): min=4, max=475, avg= 7.57, stdev= 5.49 00:16:04.432 clat (usec): min=361, max=78181, avg=13750.10, stdev=15792.12 00:16:04.432 lat (usec): min=366, max=78187, avg=13757.67, stdev=15792.15 00:16:04.432 clat percentiles (usec): 00:16:04.432 | 1.00th=[ 963], 5.00th=[ 1303], 10.00th=[ 1500], 20.00th=[ 1811], 00:16:04.432 | 30.00th=[ 2180], 40.00th=[ 2933], 50.00th=[ 8225], 60.00th=[12125], 00:16:04.432 | 70.00th=[15664], 80.00th=[17957], 90.00th=[46924], 95.00th=[50070], 00:16:04.432 | 99.00th=[54264], 99.50th=[55837], 99.90th=[60556], 99.95th=[64750], 00:16:04.432 | 99.99th=[72877] 00:16:04.432 bw ( KiB/s): min= 3912, max=56968, per=94.36%, avg=34952.53, stdev=11111.86, samples=15 00:16:04.432 iops : min= 978, max=14242, avg=8738.13, stdev=2777.97, samples=15 00:16:04.432 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.51% 00:16:04.432 lat (msec) : 2=12.34%, 4=7.87%, 10=6.56%, 20=44.88%, 50=25.15% 00:16:04.432 lat (msec) : 100=2.59% 00:16:04.432 cpu : usr=99.03%, sys=0.18%, ctx=26, 
majf=0, minf=5565 00:16:04.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:04.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.432 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.432 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.432 00:16:04.432 Run status group 0 (all jobs): 00:16:04.432 READ: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=255MiB (267MB), run=9861-9861msec 00:16:04.432 WRITE: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=256MiB (268MB), run=7077-7077msec 00:16:05.818 ----------------------------------------------------- 00:16:05.818 Suppressions used: 00:16:05.818 count bytes template 00:16:05.818 1 5 /usr/src/fio/parse.c 00:16:05.818 2 192 /usr/src/fio/iolog.c 00:16:05.818 1 8 libtcmalloc_minimal.so 00:16:05.818 1 904 libcrypto.so 00:16:05.818 ----------------------------------------------------- 00:16:05.818 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:05.818 Remove shared memory files 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57548 /dev/shm/spdk_tgt_trace.pid71714 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:05.818 ************************************ 00:16:05.818 END TEST ftl_fio_basic 00:16:05.818 ************************************ 00:16:05.818 00:16:05.818 real 1m11.763s 00:16:05.818 user 2m41.263s 00:16:05.818 sys 0m3.169s 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.818 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:05.818 17:45:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:05.818 17:45:55 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:05.818 17:45:55 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.818 17:45:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:05.818 ************************************ 00:16:05.818 START TEST ftl_bdevperf 00:16:05.818 ************************************ 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:05.818 * Looking for test storage... 
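The three fio runs above all set up the ioengine the same way: ldd the SPDK fio plugin, pull out the ASan runtime it links against, and preload both before invoking fio. A minimal sketch of that pattern, with paths as they appear in the traces (the job-file name here is illustrative):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # the sanitizer runtime the plugin links against, if any (third ldd column)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # the sanitizer runtime must come first in LD_PRELOAD, ahead of the ioengine
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio randw-verify.fio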
00:16:05.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.818 --rc genhtml_branch_coverage=1 00:16:05.818 --rc genhtml_function_coverage=1 00:16:05.818 --rc genhtml_legend=1 00:16:05.818 --rc geninfo_all_blocks=1 00:16:05.818 --rc geninfo_unexecuted_blocks=1 00:16:05.818 00:16:05.818 ' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.818 --rc genhtml_branch_coverage=1 00:16:05.818 
--rc genhtml_function_coverage=1 00:16:05.818 --rc genhtml_legend=1 00:16:05.818 --rc geninfo_all_blocks=1 00:16:05.818 --rc geninfo_unexecuted_blocks=1 00:16:05.818 00:16:05.818 ' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.818 --rc genhtml_branch_coverage=1 00:16:05.818 --rc genhtml_function_coverage=1 00:16:05.818 --rc genhtml_legend=1 00:16:05.818 --rc geninfo_all_blocks=1 00:16:05.818 --rc geninfo_unexecuted_blocks=1 00:16:05.818 00:16:05.818 ' 00:16:05.818 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.818 --rc genhtml_branch_coverage=1 00:16:05.818 --rc genhtml_function_coverage=1 00:16:05.818 --rc genhtml_legend=1 00:16:05.818 --rc geninfo_all_blocks=1 00:16:05.818 --rc geninfo_unexecuted_blocks=1 00:16:05.818 00:16:05.818 ' 00:16:05.819 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:05.819 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:05.819 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73711 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73711 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73711 ']' 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.081 17:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:06.081 [2024-10-13 17:45:55.723028] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
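bdevperf is launched with -z, so it idles until configured over JSON-RPC, and the harness's waitforlisten simply polls the RPC socket until the target answers. A minimal sketch of that launch-and-wait step (the polling loop is an assumption standing in for the helper; rpc_get_methods is a standard SPDK RPC, and /var/tmp/spdk.sock is the socket named in the log):

    build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # wait until the JSON-RPC server is accepting requests
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done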
00:16:06.081 [2024-10-13 17:45:55.723432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73711 ] 00:16:06.081 [2024-10-13 17:45:55.881488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.342 [2024-10-13 17:45:56.041815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:06.916 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:07.177 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:07.177 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:07.177 17:45:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:07.177 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:07.177 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:07.178 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:07.178 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:07.178 17:45:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:07.439 { 00:16:07.439 "name": "nvme0n1", 00:16:07.439 "aliases": [ 00:16:07.439 "dbe06fa6-4c98-4805-8b10-53704766ec44" 00:16:07.439 ], 00:16:07.439 "product_name": "NVMe disk", 00:16:07.439 "block_size": 4096, 00:16:07.439 "num_blocks": 1310720, 00:16:07.439 "uuid": "dbe06fa6-4c98-4805-8b10-53704766ec44", 00:16:07.439 "numa_id": -1, 00:16:07.439 "assigned_rate_limits": { 00:16:07.439 "rw_ios_per_sec": 0, 00:16:07.439 "rw_mbytes_per_sec": 0, 00:16:07.439 "r_mbytes_per_sec": 0, 00:16:07.439 "w_mbytes_per_sec": 0 00:16:07.439 }, 00:16:07.439 "claimed": true, 00:16:07.439 "claim_type": "read_many_write_one", 00:16:07.439 "zoned": false, 00:16:07.439 "supported_io_types": { 00:16:07.439 "read": true, 00:16:07.439 "write": true, 00:16:07.439 "unmap": true, 00:16:07.439 "flush": true, 00:16:07.439 "reset": true, 00:16:07.439 "nvme_admin": true, 00:16:07.439 "nvme_io": true, 00:16:07.439 "nvme_io_md": false, 00:16:07.439 "write_zeroes": true, 00:16:07.439 "zcopy": false, 00:16:07.439 "get_zone_info": false, 00:16:07.439 "zone_management": false, 00:16:07.439 "zone_append": false, 00:16:07.439 "compare": true, 00:16:07.439 "compare_and_write": false, 00:16:07.439 "abort": true, 00:16:07.439 "seek_hole": false, 00:16:07.439 "seek_data": false, 00:16:07.439 "copy": true, 00:16:07.439 "nvme_iov_md": false 00:16:07.439 }, 00:16:07.439 "driver_specific": { 00:16:07.439 
"nvme": [ 00:16:07.439 { 00:16:07.439 "pci_address": "0000:00:11.0", 00:16:07.439 "trid": { 00:16:07.439 "trtype": "PCIe", 00:16:07.439 "traddr": "0000:00:11.0" 00:16:07.439 }, 00:16:07.439 "ctrlr_data": { 00:16:07.439 "cntlid": 0, 00:16:07.439 "vendor_id": "0x1b36", 00:16:07.439 "model_number": "QEMU NVMe Ctrl", 00:16:07.439 "serial_number": "12341", 00:16:07.439 "firmware_revision": "8.0.0", 00:16:07.439 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:07.439 "oacs": { 00:16:07.439 "security": 0, 00:16:07.439 "format": 1, 00:16:07.439 "firmware": 0, 00:16:07.439 "ns_manage": 1 00:16:07.439 }, 00:16:07.439 "multi_ctrlr": false, 00:16:07.439 "ana_reporting": false 00:16:07.439 }, 00:16:07.439 "vs": { 00:16:07.439 "nvme_version": "1.4" 00:16:07.439 }, 00:16:07.439 "ns_data": { 00:16:07.439 "id": 1, 00:16:07.439 "can_share": false 00:16:07.439 } 00:16:07.439 } 00:16:07.439 ], 00:16:07.439 "mp_policy": "active_passive" 00:16:07.439 } 00:16:07.439 } 00:16:07.439 ]' 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:07.439 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:07.701 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=967b71c8-f15f-452b-bdfc-c895d573925c 00:16:07.701 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:07.701 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 967b71c8-f15f-452b-bdfc-c895d573925c 00:16:07.963 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:08.227 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3888edce-4b5d-4126-b771-5c8e0407c7ad 00:16:08.227 17:45:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3888edce-4b5d-4126-b771-5c8e0407c7ad 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:08.489 { 00:16:08.489 "name": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:08.489 "aliases": [ 00:16:08.489 "lvs/nvme0n1p0" 00:16:08.489 ], 00:16:08.489 "product_name": "Logical Volume", 00:16:08.489 "block_size": 4096, 00:16:08.489 "num_blocks": 26476544, 00:16:08.489 "uuid": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:08.489 "assigned_rate_limits": { 00:16:08.489 "rw_ios_per_sec": 0, 00:16:08.489 "rw_mbytes_per_sec": 0, 00:16:08.489 "r_mbytes_per_sec": 0, 00:16:08.489 "w_mbytes_per_sec": 0 00:16:08.489 }, 00:16:08.489 "claimed": false, 00:16:08.489 "zoned": false, 00:16:08.489 "supported_io_types": { 00:16:08.489 "read": true, 00:16:08.489 "write": true, 00:16:08.489 "unmap": true, 00:16:08.489 "flush": false, 00:16:08.489 "reset": true, 00:16:08.489 "nvme_admin": false, 00:16:08.489 "nvme_io": false, 00:16:08.489 "nvme_io_md": false, 00:16:08.489 "write_zeroes": true, 00:16:08.489 "zcopy": false, 00:16:08.489 "get_zone_info": false, 00:16:08.489 "zone_management": false, 00:16:08.489 "zone_append": false, 00:16:08.489 "compare": false, 00:16:08.489 "compare_and_write": false, 00:16:08.489 "abort": false, 00:16:08.489 "seek_hole": true, 00:16:08.489 "seek_data": true, 00:16:08.489 "copy": false, 00:16:08.489 "nvme_iov_md": false 00:16:08.489 }, 00:16:08.489 "driver_specific": { 00:16:08.489 "lvol": { 00:16:08.489 "lvol_store_uuid": "3888edce-4b5d-4126-b771-5c8e0407c7ad", 00:16:08.489 "base_bdev": "nvme0n1", 00:16:08.489 "thin_provision": true, 00:16:08.489 "num_allocated_clusters": 0, 00:16:08.489 "snapshot": false, 00:16:08.489 "clone": false, 00:16:08.489 "esnap_clone": false 00:16:08.489 } 00:16:08.489 } 00:16:08.489 } 00:16:08.489 ]' 00:16:08.489 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:08.750 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:09.010 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.270 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.270 { 00:16:09.270 "name": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:09.271 "aliases": [ 00:16:09.271 "lvs/nvme0n1p0" 00:16:09.271 ], 00:16:09.271 "product_name": "Logical Volume", 00:16:09.271 "block_size": 4096, 00:16:09.271 "num_blocks": 26476544, 00:16:09.271 "uuid": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:09.271 "assigned_rate_limits": { 00:16:09.271 "rw_ios_per_sec": 0, 00:16:09.271 "rw_mbytes_per_sec": 0, 00:16:09.271 "r_mbytes_per_sec": 0, 00:16:09.271 "w_mbytes_per_sec": 0 00:16:09.271 }, 00:16:09.271 "claimed": false, 00:16:09.271 "zoned": false, 00:16:09.271 "supported_io_types": { 00:16:09.271 "read": true, 00:16:09.271 "write": true, 00:16:09.271 "unmap": true, 00:16:09.271 "flush": false, 00:16:09.271 "reset": true, 00:16:09.271 "nvme_admin": false, 00:16:09.271 "nvme_io": false, 00:16:09.271 "nvme_io_md": false, 00:16:09.271 "write_zeroes": true, 00:16:09.271 "zcopy": false, 00:16:09.271 "get_zone_info": false, 00:16:09.271 "zone_management": false, 00:16:09.271 "zone_append": false, 00:16:09.271 "compare": false, 00:16:09.271 "compare_and_write": false, 00:16:09.271 "abort": false, 00:16:09.271 "seek_hole": true, 00:16:09.271 "seek_data": true, 00:16:09.271 "copy": false, 00:16:09.271 "nvme_iov_md": false 00:16:09.271 }, 00:16:09.271 "driver_specific": { 00:16:09.271 "lvol": { 00:16:09.271 "lvol_store_uuid": "3888edce-4b5d-4126-b771-5c8e0407c7ad", 00:16:09.271 "base_bdev": "nvme0n1", 00:16:09.271 "thin_provision": true, 00:16:09.271 "num_allocated_clusters": 0, 00:16:09.271 "snapshot": false, 00:16:09.271 "clone": false, 00:16:09.271 "esnap_clone": false 00:16:09.271 } 00:16:09.271 } 00:16:09.271 } 00:16:09.271 ]' 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:09.271 17:45:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.532 { 00:16:09.532 "name": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:09.532 "aliases": [ 00:16:09.532 "lvs/nvme0n1p0" 00:16:09.532 ], 00:16:09.532 "product_name": "Logical Volume", 00:16:09.532 "block_size": 4096, 00:16:09.532 "num_blocks": 26476544, 00:16:09.532 "uuid": "63da2479-f36e-45d3-a8c4-72c5d0f1bdcc", 00:16:09.532 "assigned_rate_limits": { 00:16:09.532 "rw_ios_per_sec": 0, 00:16:09.532 "rw_mbytes_per_sec": 0, 00:16:09.532 "r_mbytes_per_sec": 0, 00:16:09.532 "w_mbytes_per_sec": 0 00:16:09.532 }, 00:16:09.532 "claimed": false, 00:16:09.532 "zoned": false, 00:16:09.532 "supported_io_types": { 00:16:09.532 "read": true, 00:16:09.532 "write": true, 00:16:09.532 "unmap": true, 00:16:09.532 "flush": false, 00:16:09.532 "reset": true, 00:16:09.532 "nvme_admin": false, 00:16:09.532 "nvme_io": false, 00:16:09.532 "nvme_io_md": false, 00:16:09.532 "write_zeroes": true, 00:16:09.532 "zcopy": false, 00:16:09.532 "get_zone_info": false, 00:16:09.532 "zone_management": false, 00:16:09.532 "zone_append": false, 00:16:09.532 "compare": false, 00:16:09.532 "compare_and_write": false, 00:16:09.532 "abort": false, 00:16:09.532 "seek_hole": true, 00:16:09.532 "seek_data": true, 00:16:09.532 "copy": false, 00:16:09.532 "nvme_iov_md": false 00:16:09.532 }, 00:16:09.532 "driver_specific": { 00:16:09.532 "lvol": { 00:16:09.532 "lvol_store_uuid": "3888edce-4b5d-4126-b771-5c8e0407c7ad", 00:16:09.532 "base_bdev": "nvme0n1", 00:16:09.532 "thin_provision": true, 00:16:09.532 "num_allocated_clusters": 0, 00:16:09.532 "snapshot": false, 00:16:09.532 "clone": false, 00:16:09.532 "esnap_clone": false 00:16:09.532 } 00:16:09.532 } 00:16:09.532 } 00:16:09.532 ]' 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.532 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.794 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:09.794 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:09.794 17:45:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:09.794 17:45:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:09.794 17:45:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 63da2479-f36e-45d3-a8c4-72c5d0f1bdcc -c nvc0n1p0 --l2p_dram_limit 20 00:16:09.794 [2024-10-13 17:45:59.547095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.547143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:09.794 [2024-10-13 17:45:59.547155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.794 [2024-10-13 17:45:59.547163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.547208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.547218] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:09.794 [2024-10-13 17:45:59.547226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:09.794 [2024-10-13 17:45:59.547250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.547265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:09.794 [2024-10-13 17:45:59.547926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:09.794 [2024-10-13 17:45:59.547946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.547956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:09.794 [2024-10-13 17:45:59.547964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:16:09.794 [2024-10-13 17:45:59.547971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.547997] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d4c2ed5f-07d7-4796-8265-26bed90c1e59 00:16:09.794 [2024-10-13 17:45:59.549322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.549352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:09.794 [2024-10-13 17:45:59.549367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:09.794 [2024-10-13 17:45:59.549375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.556463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.556495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:09.794 [2024-10-13 17:45:59.556504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:16:09.794 [2024-10-13 17:45:59.556511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.556635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.556644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:09.794 [2024-10-13 17:45:59.556658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:09.794 [2024-10-13 17:45:59.556664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.556718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.556726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:09.794 [2024-10-13 17:45:59.556734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:09.794 [2024-10-13 17:45:59.556739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.556758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:09.794 [2024-10-13 17:45:59.560053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.560081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:09.794 [2024-10-13 17:45:59.560089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:16:09.794 [2024-10-13 17:45:59.560097] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.560122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.560131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:09.794 [2024-10-13 17:45:59.560139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:09.794 [2024-10-13 17:45:59.560147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.560158] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:09.794 [2024-10-13 17:45:59.560280] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:09.794 [2024-10-13 17:45:59.560290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:09.794 [2024-10-13 17:45:59.560302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:09.794 [2024-10-13 17:45:59.560310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:09.794 [2024-10-13 17:45:59.560320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:09.794 [2024-10-13 17:45:59.560326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:09.794 [2024-10-13 17:45:59.560335] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:09.794 [2024-10-13 17:45:59.560340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:09.794 [2024-10-13 17:45:59.560347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:09.794 [2024-10-13 17:45:59.560354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.560361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:09.794 [2024-10-13 17:45:59.560367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:16:09.794 [2024-10-13 17:45:59.560376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.560438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.794 [2024-10-13 17:45:59.560447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:09.794 [2024-10-13 17:45:59.560453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:09.794 [2024-10-13 17:45:59.560461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.794 [2024-10-13 17:45:59.560530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:09.794 [2024-10-13 17:45:59.560539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:09.794 [2024-10-13 17:45:59.560545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.794 [2024-10-13 17:45:59.560552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.794 [2024-10-13 17:45:59.560575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:09.794 [2024-10-13 17:45:59.560583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:09.794 [2024-10-13 17:45:59.560588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:09.794 
[2024-10-13 17:45:59.560595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:09.794 [2024-10-13 17:45:59.560600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:09.794 [2024-10-13 17:45:59.560607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.794 [2024-10-13 17:45:59.560612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:09.794 [2024-10-13 17:45:59.560619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:09.795 [2024-10-13 17:45:59.560624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.795 [2024-10-13 17:45:59.560640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:09.795 [2024-10-13 17:45:59.560646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:09.795 [2024-10-13 17:45:59.560653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:09.795 [2024-10-13 17:45:59.560666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:09.795 [2024-10-13 17:45:59.560682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:09.795 [2024-10-13 17:45:59.560701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:09.795 [2024-10-13 17:45:59.560717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:09.795 [2024-10-13 17:45:59.560735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:09.795 [2024-10-13 17:45:59.560754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.795 [2024-10-13 17:45:59.560766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:09.795 [2024-10-13 17:45:59.560772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:09.795 [2024-10-13 17:45:59.560777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.795 [2024-10-13 17:45:59.560783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:09.795 [2024-10-13 17:45:59.560788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:09.795 [2024-10-13 17:45:59.560795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:09.795 [2024-10-13 17:45:59.560807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:09.795 [2024-10-13 17:45:59.560812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:09.795 [2024-10-13 17:45:59.560826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:09.795 [2024-10-13 17:45:59.560835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.795 [2024-10-13 17:45:59.560851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:09.795 [2024-10-13 17:45:59.560857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:09.795 [2024-10-13 17:45:59.560864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:09.795 [2024-10-13 17:45:59.560869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:09.795 [2024-10-13 17:45:59.560876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:09.795 [2024-10-13 17:45:59.560881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:09.795 [2024-10-13 17:45:59.560891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:09.795 [2024-10-13 17:45:59.560898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.560907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:09.795 [2024-10-13 17:45:59.560913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:09.795 [2024-10-13 17:45:59.560921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:09.795 [2024-10-13 17:45:59.560927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:09.795 [2024-10-13 17:45:59.560935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:09.795 [2024-10-13 17:45:59.560941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:09.795 [2024-10-13 17:45:59.560948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:09.795 [2024-10-13 17:45:59.560954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:09.795 [2024-10-13 17:45:59.560963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:09.795 [2024-10-13 17:45:59.560975] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.560982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.560987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.560995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.561001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:09.795 [2024-10-13 17:45:59.561008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:09.795 [2024-10-13 17:45:59.561016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.561025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:09.795 [2024-10-13 17:45:59.561035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:09.795 [2024-10-13 17:45:59.561042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:09.795 [2024-10-13 17:45:59.561048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:09.795 [2024-10-13 17:45:59.561057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.795 [2024-10-13 17:45:59.561063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:09.795 [2024-10-13 17:45:59.561071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:16:09.795 [2024-10-13 17:45:59.561079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.795 [2024-10-13 17:45:59.561119] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
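While the scrub proceeds: condensed from the RPC traces above, the sequence that assembled this FTL instance was (UUIDs are per-run values returned by the create calls):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base device
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>            # thin-provisioned, 103424 MiB
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache device
    rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB NV cache split
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20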
00:16:09.795 [2024-10-13 17:45:59.561129] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:15.086 [2024-10-13 17:46:04.002460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.002869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:15.086 [2024-10-13 17:46:04.002911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4441.316 ms 00:16:15.086 [2024-10-13 17:46:04.002923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.041201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.041272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:15.086 [2024-10-13 17:46:04.041296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.998 ms 00:16:15.086 [2024-10-13 17:46:04.041306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.041461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.041473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:15.086 [2024-10-13 17:46:04.041490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:15.086 [2024-10-13 17:46:04.041499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.095178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.095247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:15.086 [2024-10-13 17:46:04.095267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.639 ms 00:16:15.086 [2024-10-13 17:46:04.095277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.095327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.095337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:15.086 [2024-10-13 17:46:04.095350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:15.086 [2024-10-13 17:46:04.095362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.096208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.096249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:15.086 [2024-10-13 17:46:04.096265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:16:15.086 [2024-10-13 17:46:04.096276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.096423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.096443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:15.086 [2024-10-13 17:46:04.096460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:15.086 [2024-10-13 17:46:04.096468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.114934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.115163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:15.086 [2024-10-13 
17:46:04.115190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.443 ms 00:16:15.086 [2024-10-13 17:46:04.115199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.130253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:15.086 [2024-10-13 17:46:04.139817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.139875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:15.086 [2024-10-13 17:46:04.139889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:16:15.086 [2024-10-13 17:46:04.139900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.248778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.249015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:15.086 [2024-10-13 17:46:04.249041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.845 ms 00:16:15.086 [2024-10-13 17:46:04.249053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.249272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.249291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:15.086 [2024-10-13 17:46:04.249302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:16:15.086 [2024-10-13 17:46:04.249313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.276516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.276760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:15.086 [2024-10-13 17:46:04.276787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.146 ms 00:16:15.086 [2024-10-13 17:46:04.276799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.302509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.302596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:15.086 [2024-10-13 17:46:04.302612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:16:15.086 [2024-10-13 17:46:04.302623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.303269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.303303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:15.086 [2024-10-13 17:46:04.303313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:16:15.086 [2024-10-13 17:46:04.303324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 17:46:04.397631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.397697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:15.086 [2024-10-13 17:46:04.397714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.235 ms 00:16:15.086 [2024-10-13 17:46:04.397726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.086 [2024-10-13 
17:46:04.426963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.086 [2024-10-13 17:46:04.427177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:15.086 [2024-10-13 17:46:04.427201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.141 ms 00:16:15.086 [2024-10-13 17:46:04.427213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.087 [2024-10-13 17:46:04.454473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.087 [2024-10-13 17:46:04.454697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:15.087 [2024-10-13 17:46:04.454719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.210 ms 00:16:15.087 [2024-10-13 17:46:04.454730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.087 [2024-10-13 17:46:04.483107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.087 [2024-10-13 17:46:04.483175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:15.087 [2024-10-13 17:46:04.483193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.963 ms 00:16:15.087 [2024-10-13 17:46:04.483204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.087 [2024-10-13 17:46:04.483263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.087 [2024-10-13 17:46:04.483284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:15.087 [2024-10-13 17:46:04.483294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:15.087 [2024-10-13 17:46:04.483306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.087 [2024-10-13 17:46:04.483424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.087 [2024-10-13 17:46:04.483439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:15.087 [2024-10-13 17:46:04.483449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:15.087 [2024-10-13 17:46:04.483460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.087 [2024-10-13 17:46:04.484982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4937.249 ms, result 0 00:16:15.087 { 00:16:15.087 "name": "ftl0", 00:16:15.087 "uuid": "d4c2ed5f-07d7-4796-8265-26bed90c1e59" 00:16:15.087 } 00:16:15.087 17:46:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:15.087 17:46:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:15.087 17:46:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:15.087 17:46:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:15.087 [2024-10-13 17:46:04.820917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:15.087 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:15.087 Zero copy mechanism will not be used. 00:16:15.087 Running I/O for 4 seconds... 
00:16:17.414 690.00 IOPS, 45.82 MiB/s [2024-10-13T17:46:08.168Z] 738.50 IOPS, 49.04 MiB/s [2024-10-13T17:46:09.147Z] 751.67 IOPS, 49.92 MiB/s [2024-10-13T17:46:09.147Z] 803.75 IOPS, 53.37 MiB/s 00:16:19.333 Latency(us) 00:16:19.333 [2024-10-13T17:46:09.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.333 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:19.333 ftl0 : 4.00 803.73 53.37 0.00 0.00 1314.35 188.26 2457.60 00:16:19.333 [2024-10-13T17:46:09.147Z] =================================================================================================================== 00:16:19.333 [2024-10-13T17:46:09.147Z] Total : 803.73 53.37 0.00 0.00 1314.35 188.26 2457.60 00:16:19.333 [2024-10-13 17:46:08.831692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:19.333 { 00:16:19.333 "results": [ 00:16:19.333 { 00:16:19.333 "job": "ftl0", 00:16:19.333 "core_mask": "0x1", 00:16:19.333 "workload": "randwrite", 00:16:19.333 "status": "finished", 00:16:19.333 "queue_depth": 1, 00:16:19.333 "io_size": 69632, 00:16:19.333 "runtime": 4.00136, 00:16:19.333 "iops": 803.7267329108104, 00:16:19.333 "mibps": 53.3724783573585, 00:16:19.333 "io_failed": 0, 00:16:19.333 "io_timeout": 0, 00:16:19.333 "avg_latency_us": 1314.350585533869, 00:16:19.333 "min_latency_us": 188.25846153846155, 00:16:19.333 "max_latency_us": 2457.6 00:16:19.333 } 00:16:19.333 ], 00:16:19.333 "core_count": 1 00:16:19.333 } 00:16:19.333 17:46:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:19.333 [2024-10-13 17:46:08.948993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:19.333 Running I/O for 4 seconds... 
00:16:21.222 7030.00 IOPS, 27.46 MiB/s [2024-10-13T17:46:11.980Z] 5810.00 IOPS, 22.70 MiB/s [2024-10-13T17:46:13.368Z] 5432.33 IOPS, 21.22 MiB/s [2024-10-13T17:46:13.368Z] 5549.50 IOPS, 21.68 MiB/s 00:16:23.554 Latency(us) 00:16:23.554 [2024-10-13T17:46:13.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.554 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.554 ftl0 : 4.02 5552.75 21.69 0.00 0.00 22996.52 293.02 45572.73 00:16:23.554 [2024-10-13T17:46:13.368Z] =================================================================================================================== 00:16:23.554 [2024-10-13T17:46:13.368Z] Total : 5552.75 21.69 0.00 0.00 22996.52 0.00 45572.73 00:16:23.554 [2024-10-13 17:46:12.975930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:23.554 { 00:16:23.554 "results": [ 00:16:23.554 { 00:16:23.554 "job": "ftl0", 00:16:23.554 "core_mask": "0x1", 00:16:23.554 "workload": "randwrite", 00:16:23.554 "status": "finished", 00:16:23.554 "queue_depth": 128, 00:16:23.554 "io_size": 4096, 00:16:23.554 "runtime": 4.018906, 00:16:23.554 "iops": 5552.754903946497, 00:16:23.554 "mibps": 21.690448843541002, 00:16:23.554 "io_failed": 0, 00:16:23.554 "io_timeout": 0, 00:16:23.554 "avg_latency_us": 22996.51900340563, 00:16:23.554 "min_latency_us": 293.02153846153846, 00:16:23.554 "max_latency_us": 45572.72615384615 00:16:23.554 } 00:16:23.554 ], 00:16:23.554 "core_count": 1 00:16:23.554 } 00:16:23.554 17:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:23.554 [2024-10-13 17:46:13.087725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:23.554 Running I/O for 4 seconds... 
00:16:25.443 4437.00 IOPS, 17.33 MiB/s [2024-10-13T17:46:16.202Z] 4377.50 IOPS, 17.10 MiB/s [2024-10-13T17:46:17.146Z] 4691.67 IOPS, 18.33 MiB/s [2024-10-13T17:46:17.146Z] 4930.25 IOPS, 19.26 MiB/s 00:16:27.332 Latency(us) 00:16:27.332 [2024-10-13T17:46:17.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.332 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:27.332 Verification LBA range: start 0x0 length 0x1400000 00:16:27.332 ftl0 : 4.01 4946.71 19.32 0.00 0.00 25807.61 327.68 37506.76 00:16:27.332 [2024-10-13T17:46:17.146Z] =================================================================================================================== 00:16:27.332 [2024-10-13T17:46:17.146Z] Total : 4946.71 19.32 0.00 0.00 25807.61 0.00 37506.76 00:16:27.332 [2024-10-13 17:46:17.116743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:27.332 { 00:16:27.332 "results": [ 00:16:27.332 { 00:16:27.332 "job": "ftl0", 00:16:27.332 "core_mask": "0x1", 00:16:27.332 "workload": "verify", 00:16:27.332 "status": "finished", 00:16:27.332 "verify_range": { 00:16:27.332 "start": 0, 00:16:27.332 "length": 20971520 00:16:27.332 }, 00:16:27.332 "queue_depth": 128, 00:16:27.332 "io_size": 4096, 00:16:27.332 "runtime": 4.01216, 00:16:27.332 "iops": 4946.711995533578, 00:16:27.332 "mibps": 19.32309373255304, 00:16:27.332 "io_failed": 0, 00:16:27.332 "io_timeout": 0, 00:16:27.332 "avg_latency_us": 25807.611514237764, 00:16:27.332 "min_latency_us": 327.68, 00:16:27.332 "max_latency_us": 37506.75692307692 00:16:27.332 } 00:16:27.332 ], 00:16:27.332 "core_count": 1 00:16:27.332 } 00:16:27.332 17:46:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:27.593 [2024-10-13 17:46:17.336908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.593 [2024-10-13 17:46:17.337180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:27.593 [2024-10-13 17:46:17.337406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:27.593 [2024-10-13 17:46:17.337426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.593 [2024-10-13 17:46:17.337469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:27.593 [2024-10-13 17:46:17.340819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.593 [2024-10-13 17:46:17.340864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:27.593 [2024-10-13 17:46:17.340881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:16:27.593 [2024-10-13 17:46:17.340890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.593 [2024-10-13 17:46:17.344149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.593 [2024-10-13 17:46:17.344328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:27.593 [2024-10-13 17:46:17.344356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:16:27.593 [2024-10-13 17:46:17.344366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.558259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.558320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:16:27.854 [2024-10-13 17:46:17.558347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 213.861 ms 00:16:27.854 [2024-10-13 17:46:17.558356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.564635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.564675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:27.854 [2024-10-13 17:46:17.564690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms 00:16:27.854 [2024-10-13 17:46:17.564698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.592119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.592166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:27.854 [2024-10-13 17:46:17.592191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.356 ms 00:16:27.854 [2024-10-13 17:46:17.592200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.609775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.609960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:27.854 [2024-10-13 17:46:17.609992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.522 ms 00:16:27.854 [2024-10-13 17:46:17.610001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.610189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.610202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:27.854 [2024-10-13 17:46:17.610218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:16:27.854 [2024-10-13 17:46:17.610226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.635761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.635804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:27.854 [2024-10-13 17:46:17.635820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.513 ms 00:16:27.854 [2024-10-13 17:46:17.635828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.854 [2024-10-13 17:46:17.660765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.854 [2024-10-13 17:46:17.660924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:27.854 [2024-10-13 17:46:17.660949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.889 ms 00:16:27.854 [2024-10-13 17:46:17.660957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.116 [2024-10-13 17:46:17.685149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.116 [2024-10-13 17:46:17.685195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:28.116 [2024-10-13 17:46:17.685209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.152 ms 00:16:28.116 [2024-10-13 17:46:17.685217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.116 [2024-10-13 17:46:17.709764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.116 [2024-10-13 17:46:17.709811] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:28.116 [2024-10-13 17:46:17.709830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.458 ms 00:16:28.116 [2024-10-13 17:46:17.709838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.116 [2024-10-13 17:46:17.709883] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:28.116 [2024-10-13 17:46:17.709900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:28.116 [2024-10-13 17:46:17.709980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.709991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.709999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:28.117 [2024-10-13 17:46:17.710111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710853] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:28.117 [2024-10-13 17:46:17.710898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:28.117 [2024-10-13 17:46:17.710909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4c2ed5f-07d7-4796-8265-26bed90c1e59 00:16:28.117 [2024-10-13 17:46:17.710918] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:28.117 [2024-10-13 17:46:17.710930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:28.118 [2024-10-13 17:46:17.710937] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:28.118 [2024-10-13 17:46:17.710948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:28.118 [2024-10-13 17:46:17.710958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:28.118 [2024-10-13 17:46:17.710969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:28.118 [2024-10-13 17:46:17.710976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:28.118 [2024-10-13 17:46:17.710987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:28.118 [2024-10-13 17:46:17.710994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:28.118 [2024-10-13 17:46:17.711005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.118 [2024-10-13 17:46:17.711013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:28.118 [2024-10-13 17:46:17.711024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:16:28.118 [2024-10-13 17:46:17.711031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.725538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.118 [2024-10-13 17:46:17.725611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:28.118 [2024-10-13 17:46:17.725630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.465 ms 00:16:28.118 [2024-10-13 17:46:17.725638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.726072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.118 [2024-10-13 17:46:17.726089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:28.118 [2024-10-13 17:46:17.726101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:16:28.118 [2024-10-13 17:46:17.726110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.767884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.118 [2024-10-13 17:46:17.768074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:28.118 [2024-10-13 17:46:17.768138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.118 [2024-10-13 17:46:17.768147] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.768240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.118 [2024-10-13 17:46:17.768250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:28.118 [2024-10-13 17:46:17.768261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.118 [2024-10-13 17:46:17.768269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.768380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.118 [2024-10-13 17:46:17.768393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:28.118 [2024-10-13 17:46:17.768404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.118 [2024-10-13 17:46:17.768415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.768436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.118 [2024-10-13 17:46:17.768445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:28.118 [2024-10-13 17:46:17.768455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.118 [2024-10-13 17:46:17.768463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.118 [2024-10-13 17:46:17.860297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.118 [2024-10-13 17:46:17.860371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:28.118 [2024-10-13 17:46:17.860396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.118 [2024-10-13 17:46:17.860405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.378 [2024-10-13 17:46:17.934913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.378 [2024-10-13 17:46:17.935200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:28.378 [2024-10-13 17:46:17.935229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.378 [2024-10-13 17:46:17.935239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.378 [2024-10-13 17:46:17.935370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.378 [2024-10-13 17:46:17.935383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:28.379 [2024-10-13 17:46:17.935395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.379 [2024-10-13 17:46:17.935404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.935492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.379 [2024-10-13 17:46:17.935503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:28.379 [2024-10-13 17:46:17.935515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.379 [2024-10-13 17:46:17.935523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.935671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.379 [2024-10-13 17:46:17.935684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:28.379 [2024-10-13 17:46:17.935700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:28.379 [2024-10-13 17:46:17.935708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.935750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.379 [2024-10-13 17:46:17.935760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:28.379 [2024-10-13 17:46:17.935772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.379 [2024-10-13 17:46:17.935780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.935838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.379 [2024-10-13 17:46:17.935848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:28.379 [2024-10-13 17:46:17.935860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.379 [2024-10-13 17:46:17.935868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.935935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.379 [2024-10-13 17:46:17.935957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:28.379 [2024-10-13 17:46:17.935970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.379 [2024-10-13 17:46:17.935978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.379 [2024-10-13 17:46:17.936154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 599.186 ms, result 0 00:16:28.379 true 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73711 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73711 ']' 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73711 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.379 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73711 00:16:28.379 killing process with pid 73711 00:16:28.379 Received shutdown signal, test time was about 4.000000 seconds 00:16:28.379 00:16:28.379 Latency(us) 00:16:28.379 [2024-10-13T17:46:18.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.379 [2024-10-13T17:46:18.193Z] =================================================================================================================== 00:16:28.379 [2024-10-13T17:46:18.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.379 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.379 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.379 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73711' 00:16:28.379 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73711 00:16:28.379 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73711 00:16:30.926 Remove shared memory files 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:30.926 17:46:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:30.926 ************************************ 00:16:30.926 END TEST ftl_bdevperf 00:16:30.926 ************************************ 00:16:30.926 00:16:30.926 real 0m25.003s 00:16:30.926 user 0m27.534s 00:16:30.926 sys 0m1.103s 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.926 17:46:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:30.926 17:46:20 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:30.926 17:46:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:30.926 17:46:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.926 17:46:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:30.926 ************************************ 00:16:30.926 START TEST ftl_trim 00:16:30.926 ************************************ 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:30.926 * Looking for test storage... 00:16:30.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.926 17:46:20 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.926 --rc genhtml_branch_coverage=1 00:16:30.926 --rc genhtml_function_coverage=1 00:16:30.926 --rc genhtml_legend=1 00:16:30.926 --rc geninfo_all_blocks=1 00:16:30.926 --rc geninfo_unexecuted_blocks=1 00:16:30.926 00:16:30.926 ' 00:16:30.926 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.927 --rc genhtml_branch_coverage=1 00:16:30.927 --rc genhtml_function_coverage=1 00:16:30.927 --rc genhtml_legend=1 00:16:30.927 --rc geninfo_all_blocks=1 00:16:30.927 --rc geninfo_unexecuted_blocks=1 00:16:30.927 00:16:30.927 ' 00:16:30.927 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.927 --rc genhtml_branch_coverage=1 00:16:30.927 --rc genhtml_function_coverage=1 00:16:30.927 --rc genhtml_legend=1 00:16:30.927 --rc geninfo_all_blocks=1 00:16:30.927 --rc geninfo_unexecuted_blocks=1 00:16:30.927 00:16:30.927 ' 00:16:30.927 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.927 --rc genhtml_branch_coverage=1 00:16:30.927 --rc genhtml_function_coverage=1 00:16:30.927 --rc genhtml_legend=1 00:16:30.927 --rc geninfo_all_blocks=1 00:16:30.927 --rc geninfo_unexecuted_blocks=1 00:16:30.927 00:16:30.927 ' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:30.927 17:46:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:31.188 17:46:20 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:31.188 17:46:20 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74087 00:16:31.188 17:46:20 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:31.188 17:46:20 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74087 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74087 ']' 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.188 17:46:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:31.188 [2024-10-13 17:46:20.841259] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:16:31.188 [2024-10-13 17:46:20.841691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74087 ] 00:16:31.188 [2024-10-13 17:46:21.001423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.449 [2024-10-13 17:46:21.151883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.449 [2024-10-13 17:46:21.152188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.449 [2024-10-13 17:46:21.152258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.393 17:46:21 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.393 17:46:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:32.393 17:46:21 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:32.655 17:46:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:32.655 17:46:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:32.655 17:46:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:32.655 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:32.655 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:32.655 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:32.655 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:32.655 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:32.916 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:32.916 { 00:16:32.916 "name": "nvme0n1", 00:16:32.916 "aliases": [ 
00:16:32.916 "8d826821-7ce5-4fdf-b3b9-7c731467aa11" 00:16:32.916 ], 00:16:32.916 "product_name": "NVMe disk", 00:16:32.916 "block_size": 4096, 00:16:32.916 "num_blocks": 1310720, 00:16:32.916 "uuid": "8d826821-7ce5-4fdf-b3b9-7c731467aa11", 00:16:32.916 "numa_id": -1, 00:16:32.916 "assigned_rate_limits": { 00:16:32.916 "rw_ios_per_sec": 0, 00:16:32.916 "rw_mbytes_per_sec": 0, 00:16:32.916 "r_mbytes_per_sec": 0, 00:16:32.916 "w_mbytes_per_sec": 0 00:16:32.916 }, 00:16:32.916 "claimed": true, 00:16:32.916 "claim_type": "read_many_write_one", 00:16:32.916 "zoned": false, 00:16:32.916 "supported_io_types": { 00:16:32.916 "read": true, 00:16:32.916 "write": true, 00:16:32.917 "unmap": true, 00:16:32.917 "flush": true, 00:16:32.917 "reset": true, 00:16:32.917 "nvme_admin": true, 00:16:32.917 "nvme_io": true, 00:16:32.917 "nvme_io_md": false, 00:16:32.917 "write_zeroes": true, 00:16:32.917 "zcopy": false, 00:16:32.917 "get_zone_info": false, 00:16:32.917 "zone_management": false, 00:16:32.917 "zone_append": false, 00:16:32.917 "compare": true, 00:16:32.917 "compare_and_write": false, 00:16:32.917 "abort": true, 00:16:32.917 "seek_hole": false, 00:16:32.917 "seek_data": false, 00:16:32.917 "copy": true, 00:16:32.917 "nvme_iov_md": false 00:16:32.917 }, 00:16:32.917 "driver_specific": { 00:16:32.917 "nvme": [ 00:16:32.917 { 00:16:32.917 "pci_address": "0000:00:11.0", 00:16:32.917 "trid": { 00:16:32.917 "trtype": "PCIe", 00:16:32.917 "traddr": "0000:00:11.0" 00:16:32.917 }, 00:16:32.917 "ctrlr_data": { 00:16:32.917 "cntlid": 0, 00:16:32.917 "vendor_id": "0x1b36", 00:16:32.917 "model_number": "QEMU NVMe Ctrl", 00:16:32.917 "serial_number": "12341", 00:16:32.917 "firmware_revision": "8.0.0", 00:16:32.917 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:32.917 "oacs": { 00:16:32.917 "security": 0, 00:16:32.917 "format": 1, 00:16:32.917 "firmware": 0, 00:16:32.917 "ns_manage": 1 00:16:32.917 }, 00:16:32.917 "multi_ctrlr": false, 00:16:32.917 "ana_reporting": false 00:16:32.917 }, 00:16:32.917 "vs": { 00:16:32.917 "nvme_version": "1.4" 00:16:32.917 }, 00:16:32.917 "ns_data": { 00:16:32.917 "id": 1, 00:16:32.917 "can_share": false 00:16:32.917 } 00:16:32.917 } 00:16:32.917 ], 00:16:32.917 "mp_policy": "active_passive" 00:16:32.917 } 00:16:32.917 } 00:16:32.917 ]' 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:32.917 17:46:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:32.917 17:46:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:32.917 17:46:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:32.917 17:46:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:32.917 17:46:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:32.917 17:46:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:33.178 17:46:22 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3888edce-4b5d-4126-b771-5c8e0407c7ad 00:16:33.178 17:46:22 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:33.178 17:46:22 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3888edce-4b5d-4126-b771-5c8e0407c7ad 00:16:33.178 17:46:22 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:33.440 17:46:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=82345f5c-b25b-4818-a815-1fc06212f051 00:16:33.440 17:46:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 82345f5c-b25b-4818-a815-1fc06212f051 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:33.701 17:46:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.701 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.701 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:33.701 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:33.701 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:33.701 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 904852cf-1138-4052-9924-9c01f5ae451a 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:33.962 { 00:16:33.962 "name": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:33.962 "aliases": [ 00:16:33.962 "lvs/nvme0n1p0" 00:16:33.962 ], 00:16:33.962 "product_name": "Logical Volume", 00:16:33.962 "block_size": 4096, 00:16:33.962 "num_blocks": 26476544, 00:16:33.962 "uuid": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:33.962 "assigned_rate_limits": { 00:16:33.962 "rw_ios_per_sec": 0, 00:16:33.962 "rw_mbytes_per_sec": 0, 00:16:33.962 "r_mbytes_per_sec": 0, 00:16:33.962 "w_mbytes_per_sec": 0 00:16:33.962 }, 00:16:33.962 "claimed": false, 00:16:33.962 "zoned": false, 00:16:33.962 "supported_io_types": { 00:16:33.962 "read": true, 00:16:33.962 "write": true, 00:16:33.962 "unmap": true, 00:16:33.962 "flush": false, 00:16:33.962 "reset": true, 00:16:33.962 "nvme_admin": false, 00:16:33.962 "nvme_io": false, 00:16:33.962 "nvme_io_md": false, 00:16:33.962 "write_zeroes": true, 00:16:33.962 "zcopy": false, 00:16:33.962 "get_zone_info": false, 00:16:33.962 "zone_management": false, 00:16:33.962 "zone_append": false, 00:16:33.962 "compare": false, 00:16:33.962 "compare_and_write": false, 00:16:33.962 "abort": false, 00:16:33.962 "seek_hole": true, 00:16:33.962 "seek_data": true, 00:16:33.962 "copy": false, 00:16:33.962 "nvme_iov_md": false 00:16:33.962 }, 00:16:33.962 "driver_specific": { 00:16:33.962 "lvol": { 00:16:33.962 "lvol_store_uuid": "82345f5c-b25b-4818-a815-1fc06212f051", 00:16:33.962 "base_bdev": "nvme0n1", 00:16:33.962 "thin_provision": true, 00:16:33.962 "num_allocated_clusters": 0, 00:16:33.962 "snapshot": false, 00:16:33.962 "clone": false, 00:16:33.962 "esnap_clone": false 00:16:33.962 } 00:16:33.962 } 00:16:33.962 } 00:16:33.962 ]' 00:16:33.962 17:46:23 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:33.962 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:33.962 17:46:23 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:33.962 17:46:23 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:33.962 17:46:23 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:34.222 17:46:23 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:34.223 17:46:23 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:34.223 17:46:23 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 904852cf-1138-4052-9924-9c01f5ae451a 00:16:34.223 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=904852cf-1138-4052-9924-9c01f5ae451a 00:16:34.223 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:34.223 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:34.223 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:34.223 17:46:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 904852cf-1138-4052-9924-9c01f5ae451a 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:34.483 { 00:16:34.483 "name": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:34.483 "aliases": [ 00:16:34.483 "lvs/nvme0n1p0" 00:16:34.483 ], 00:16:34.483 "product_name": "Logical Volume", 00:16:34.483 "block_size": 4096, 00:16:34.483 "num_blocks": 26476544, 00:16:34.483 "uuid": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:34.483 "assigned_rate_limits": { 00:16:34.483 "rw_ios_per_sec": 0, 00:16:34.483 "rw_mbytes_per_sec": 0, 00:16:34.483 "r_mbytes_per_sec": 0, 00:16:34.483 "w_mbytes_per_sec": 0 00:16:34.483 }, 00:16:34.483 "claimed": false, 00:16:34.483 "zoned": false, 00:16:34.483 "supported_io_types": { 00:16:34.483 "read": true, 00:16:34.483 "write": true, 00:16:34.483 "unmap": true, 00:16:34.483 "flush": false, 00:16:34.483 "reset": true, 00:16:34.483 "nvme_admin": false, 00:16:34.483 "nvme_io": false, 00:16:34.483 "nvme_io_md": false, 00:16:34.483 "write_zeroes": true, 00:16:34.483 "zcopy": false, 00:16:34.483 "get_zone_info": false, 00:16:34.483 "zone_management": false, 00:16:34.483 "zone_append": false, 00:16:34.483 "compare": false, 00:16:34.483 "compare_and_write": false, 00:16:34.483 "abort": false, 00:16:34.483 "seek_hole": true, 00:16:34.483 "seek_data": true, 00:16:34.483 "copy": false, 00:16:34.483 "nvme_iov_md": false 00:16:34.483 }, 00:16:34.483 "driver_specific": { 00:16:34.483 "lvol": { 00:16:34.483 "lvol_store_uuid": "82345f5c-b25b-4818-a815-1fc06212f051", 00:16:34.483 "base_bdev": "nvme0n1", 00:16:34.483 "thin_provision": true, 00:16:34.483 "num_allocated_clusters": 0, 00:16:34.483 "snapshot": false, 00:16:34.483 "clone": false, 00:16:34.483 "esnap_clone": false 00:16:34.483 } 00:16:34.483 } 00:16:34.483 } 00:16:34.483 ]' 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:34.483 17:46:24 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:34.483 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:34.483 17:46:24 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:34.483 17:46:24 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:34.744 17:46:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:34.744 17:46:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:34.744 17:46:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 904852cf-1138-4052-9924-9c01f5ae451a 00:16:34.744 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=904852cf-1138-4052-9924-9c01f5ae451a 00:16:34.744 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:34.744 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:34.744 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:34.744 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 904852cf-1138-4052-9924-9c01f5ae451a 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.005 { 00:16:35.005 "name": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:35.005 "aliases": [ 00:16:35.005 "lvs/nvme0n1p0" 00:16:35.005 ], 00:16:35.005 "product_name": "Logical Volume", 00:16:35.005 "block_size": 4096, 00:16:35.005 "num_blocks": 26476544, 00:16:35.005 "uuid": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:35.005 "assigned_rate_limits": { 00:16:35.005 "rw_ios_per_sec": 0, 00:16:35.005 "rw_mbytes_per_sec": 0, 00:16:35.005 "r_mbytes_per_sec": 0, 00:16:35.005 "w_mbytes_per_sec": 0 00:16:35.005 }, 00:16:35.005 "claimed": false, 00:16:35.005 "zoned": false, 00:16:35.005 "supported_io_types": { 00:16:35.005 "read": true, 00:16:35.005 "write": true, 00:16:35.005 "unmap": true, 00:16:35.005 "flush": false, 00:16:35.005 "reset": true, 00:16:35.005 "nvme_admin": false, 00:16:35.005 "nvme_io": false, 00:16:35.005 "nvme_io_md": false, 00:16:35.005 "write_zeroes": true, 00:16:35.005 "zcopy": false, 00:16:35.005 "get_zone_info": false, 00:16:35.005 "zone_management": false, 00:16:35.005 "zone_append": false, 00:16:35.005 "compare": false, 00:16:35.005 "compare_and_write": false, 00:16:35.005 "abort": false, 00:16:35.005 "seek_hole": true, 00:16:35.005 "seek_data": true, 00:16:35.005 "copy": false, 00:16:35.005 "nvme_iov_md": false 00:16:35.005 }, 00:16:35.005 "driver_specific": { 00:16:35.005 "lvol": { 00:16:35.005 "lvol_store_uuid": "82345f5c-b25b-4818-a815-1fc06212f051", 00:16:35.005 "base_bdev": "nvme0n1", 00:16:35.005 "thin_provision": true, 00:16:35.005 "num_allocated_clusters": 0, 00:16:35.005 "snapshot": false, 00:16:35.005 "clone": false, 00:16:35.005 "esnap_clone": false 00:16:35.005 } 00:16:35.005 } 00:16:35.005 } 00:16:35.005 ]' 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.005 17:46:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.005 17:46:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:35.005 17:46:24 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 904852cf-1138-4052-9924-9c01f5ae451a -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:35.267 [2024-10-13 17:46:24.898347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.898389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:35.267 [2024-10-13 17:46:24.898404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:35.267 [2024-10-13 17:46:24.898410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.900800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.900908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.267 [2024-10-13 17:46:24.900925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:16:35.267 [2024-10-13 17:46:24.900932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.901025] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:35.267 [2024-10-13 17:46:24.901599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:35.267 [2024-10-13 17:46:24.901621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.901628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.267 [2024-10-13 17:46:24.901637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:16:35.267 [2024-10-13 17:46:24.901643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.901719] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:16:35.267 [2024-10-13 17:46:24.903010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.903036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:35.267 [2024-10-13 17:46:24.903047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:35.267 [2024-10-13 17:46:24.903057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.909923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.909952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.267 [2024-10-13 17:46:24.909960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.804 ms 00:16:35.267 [2024-10-13 17:46:24.909968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.910064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.910079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.267 [2024-10-13 17:46:24.910086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.059 ms 00:16:35.267 [2024-10-13 17:46:24.910097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.910131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.910139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:35.267 [2024-10-13 17:46:24.910145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:35.267 [2024-10-13 17:46:24.910153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.910178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:35.267 [2024-10-13 17:46:24.913423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.913449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.267 [2024-10-13 17:46:24.913458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:16:35.267 [2024-10-13 17:46:24.913464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.913519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.913526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:35.267 [2024-10-13 17:46:24.913534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:35.267 [2024-10-13 17:46:24.913553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.913587] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:35.267 [2024-10-13 17:46:24.913700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:35.267 [2024-10-13 17:46:24.913714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:35.267 [2024-10-13 17:46:24.913723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:35.267 [2024-10-13 17:46:24.913734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:35.267 [2024-10-13 17:46:24.913741] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:35.267 [2024-10-13 17:46:24.913749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:35.267 [2024-10-13 17:46:24.913755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:35.267 [2024-10-13 17:46:24.913761] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:35.267 [2024-10-13 17:46:24.913767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:35.267 [2024-10-13 17:46:24.913775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 [2024-10-13 17:46:24.913781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:35.267 [2024-10-13 17:46:24.913790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:16:35.267 [2024-10-13 17:46:24.913796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.913873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.267 
[2024-10-13 17:46:24.913880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:35.267 [2024-10-13 17:46:24.913887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:35.267 [2024-10-13 17:46:24.913894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.267 [2024-10-13 17:46:24.913985] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:35.267 [2024-10-13 17:46:24.913993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:35.267 [2024-10-13 17:46:24.914001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.267 [2024-10-13 17:46:24.914009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:35.267 [2024-10-13 17:46:24.914022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:35.267 [2024-10-13 17:46:24.914033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:35.267 [2024-10-13 17:46:24.914039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.267 [2024-10-13 17:46:24.914051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:35.267 [2024-10-13 17:46:24.914056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:35.267 [2024-10-13 17:46:24.914062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.267 [2024-10-13 17:46:24.914066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:35.267 [2024-10-13 17:46:24.914073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:35.267 [2024-10-13 17:46:24.914078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:35.267 [2024-10-13 17:46:24.914090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:35.267 [2024-10-13 17:46:24.914098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:35.267 [2024-10-13 17:46:24.914112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.267 [2024-10-13 17:46:24.914124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:35.267 [2024-10-13 17:46:24.914129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.267 [2024-10-13 17:46:24.914140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:35.267 [2024-10-13 17:46:24.914146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:35.267 [2024-10-13 17:46:24.914152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.268 [2024-10-13 17:46:24.914158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:35.268 [2024-10-13 17:46:24.914163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:35.268 [2024-10-13 17:46:24.914170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.268 [2024-10-13 17:46:24.914175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:35.268 [2024-10-13 17:46:24.914183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:35.268 [2024-10-13 17:46:24.914187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.268 [2024-10-13 17:46:24.914194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:35.268 [2024-10-13 17:46:24.914199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:35.268 [2024-10-13 17:46:24.914205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.268 [2024-10-13 17:46:24.914210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:35.268 [2024-10-13 17:46:24.914217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:35.268 [2024-10-13 17:46:24.914221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.268 [2024-10-13 17:46:24.914228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:35.268 [2024-10-13 17:46:24.914232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:35.268 [2024-10-13 17:46:24.914239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.268 [2024-10-13 17:46:24.914243] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:35.268 [2024-10-13 17:46:24.914251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:35.268 [2024-10-13 17:46:24.914257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.268 [2024-10-13 17:46:24.914263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.268 [2024-10-13 17:46:24.914269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:35.268 [2024-10-13 17:46:24.914278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:35.268 [2024-10-13 17:46:24.914283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:35.268 [2024-10-13 17:46:24.914291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:35.268 [2024-10-13 17:46:24.914296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:35.268 [2024-10-13 17:46:24.914303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:35.268 [2024-10-13 17:46:24.914311] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:35.268 [2024-10-13 17:46:24.914320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:35.268 [2024-10-13 17:46:24.914334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:35.268 [2024-10-13 17:46:24.914339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:35.268 [2024-10-13 17:46:24.914346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:35.268 [2024-10-13 17:46:24.914352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:35.268 [2024-10-13 17:46:24.914358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:35.268 [2024-10-13 17:46:24.914364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:35.268 [2024-10-13 17:46:24.914371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:35.268 [2024-10-13 17:46:24.914376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:35.268 [2024-10-13 17:46:24.914384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:35.268 [2024-10-13 17:46:24.914415] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:35.268 [2024-10-13 17:46:24.914422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:35.268 [2024-10-13 17:46:24.914437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:35.268 [2024-10-13 17:46:24.914443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:35.268 [2024-10-13 17:46:24.914450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:35.268 [2024-10-13 17:46:24.914457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.268 [2024-10-13 17:46:24.914467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:35.268 [2024-10-13 17:46:24.914473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:16:35.268 [2024-10-13 17:46:24.914481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.268 [2024-10-13 17:46:24.914573] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:35.268 [2024-10-13 17:46:24.914589] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:38.601 [2024-10-13 17:46:27.954635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.601 [2024-10-13 17:46:27.954697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:38.601 [2024-10-13 17:46:27.954716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3040.050 ms 00:16:38.602 [2024-10-13 17:46:27.954727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:27.982944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:27.982994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:38.602 [2024-10-13 17:46:27.983008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.974 ms 00:16:38.602 [2024-10-13 17:46:27.983019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:27.983139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:27.983151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:38.602 [2024-10-13 17:46:27.983160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:38.602 [2024-10-13 17:46:27.983329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.027942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.028109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:38.602 [2024-10-13 17:46:28.028133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.553 ms 00:16:38.602 [2024-10-13 17:46:28.028151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.028277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.028294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:38.602 [2024-10-13 17:46:28.028304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:38.602 [2024-10-13 17:46:28.028315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.028770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.028791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:38.602 [2024-10-13 17:46:28.028802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:16:38.602 [2024-10-13 17:46:28.028813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.028942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.028955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:38.602 [2024-10-13 17:46:28.028964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:38.602 [2024-10-13 17:46:28.028976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.045550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.045597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:38.602 [2024-10-13 17:46:28.045607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.527 ms 00:16:38.602 [2024-10-13 17:46:28.045617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.058031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:38.602 [2024-10-13 17:46:28.075436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.075574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:38.602 [2024-10-13 17:46:28.075595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.721 ms 00:16:38.602 [2024-10-13 17:46:28.075606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.144982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.145021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:38.602 [2024-10-13 17:46:28.145040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.309 ms 00:16:38.602 [2024-10-13 17:46:28.145048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.145253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.145264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:38.602 [2024-10-13 17:46:28.145277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:38.602 [2024-10-13 17:46:28.145284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.168663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.168695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:38.602 [2024-10-13 17:46:28.168711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.348 ms 00:16:38.602 [2024-10-13 17:46:28.168719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.191088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.191209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:38.602 [2024-10-13 17:46:28.191230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.312 ms 00:16:38.602 [2024-10-13 17:46:28.191237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.191869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.191889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:38.602 [2024-10-13 17:46:28.191900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:16:38.602 [2024-10-13 17:46:28.191907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.261992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.262112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:38.602 [2024-10-13 17:46:28.262134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.047 ms 00:16:38.602 [2024-10-13 17:46:28.262144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
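For reference, the bdev stack this trace has been assembling can be re-created by hand against a running spdk_tgt using the same RPCs the test scripts issued above. This is a minimal sketch only: the bdev names, PCI addresses, and sizes are copied from the trace, the two UUID placeholders stand in for values rpc.py returns at run time, and the cleanup and error handling of the real ftl/common.sh helpers are omitted.

    # Minimal sketch of the ftl0 stack built above (UUIDs are run-time placeholders).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base namespace: 1310720 x 4096 B blocks
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base namespace
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>             # thin-provisioned 103424 MiB base lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache controller
    $RPC bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split, used as nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <base_lvol_uuid> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10        # assemble the FTL bdev

The sizes fall out of the get_bdev_size helper, which multiplies block_size by num_blocks from bdev_get_bdevs and converts to MiB: 4096 B x 1310720 blocks = 5120 MiB for the raw namespace, and 4096 B x 26476544 blocks = 103424 MiB for the thin lvol; the 5171 MiB cache split matches 5% of that 103424 MiB base size.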
00:16:38.602 [2024-10-13 17:46:28.286506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.286538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:38.602 [2024-10-13 17:46:28.286551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.279 ms 00:16:38.602 [2024-10-13 17:46:28.286571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.309203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.309343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:38.602 [2024-10-13 17:46:28.309361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.589 ms 00:16:38.602 [2024-10-13 17:46:28.309369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.332540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.332659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:38.602 [2024-10-13 17:46:28.332677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.112 ms 00:16:38.602 [2024-10-13 17:46:28.332698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.332760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.332769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:38.602 [2024-10-13 17:46:28.332782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:38.602 [2024-10-13 17:46:28.332792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.332877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.602 [2024-10-13 17:46:28.332885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:38.602 [2024-10-13 17:46:28.332896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:38.602 [2024-10-13 17:46:28.332903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.602 [2024-10-13 17:46:28.333795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.602 [2024-10-13 17:46:28.336755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3435.127 ms, result 0 00:16:38.602 [2024-10-13 17:46:28.337534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:38.602 { 00:16:38.602 "name": "ftl0", 00:16:38.602 "uuid": "74767a50-ba29-46ab-9b2a-5c33f6d9a290" 00:16:38.602 } 00:16:38.602 17:46:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:38.602 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:38.863 17:46:28 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:39.124 [ 00:16:39.124 { 00:16:39.124 "name": "ftl0", 00:16:39.124 "aliases": [ 00:16:39.124 "74767a50-ba29-46ab-9b2a-5c33f6d9a290" 00:16:39.124 ], 00:16:39.124 "product_name": "FTL disk", 00:16:39.124 "block_size": 4096, 00:16:39.124 "num_blocks": 23592960, 00:16:39.124 "uuid": "74767a50-ba29-46ab-9b2a-5c33f6d9a290", 00:16:39.124 "assigned_rate_limits": { 00:16:39.124 "rw_ios_per_sec": 0, 00:16:39.124 "rw_mbytes_per_sec": 0, 00:16:39.124 "r_mbytes_per_sec": 0, 00:16:39.124 "w_mbytes_per_sec": 0 00:16:39.124 }, 00:16:39.124 "claimed": false, 00:16:39.124 "zoned": false, 00:16:39.124 "supported_io_types": { 00:16:39.124 "read": true, 00:16:39.124 "write": true, 00:16:39.124 "unmap": true, 00:16:39.124 "flush": true, 00:16:39.124 "reset": false, 00:16:39.124 "nvme_admin": false, 00:16:39.124 "nvme_io": false, 00:16:39.124 "nvme_io_md": false, 00:16:39.124 "write_zeroes": true, 00:16:39.124 "zcopy": false, 00:16:39.124 "get_zone_info": false, 00:16:39.124 "zone_management": false, 00:16:39.124 "zone_append": false, 00:16:39.124 "compare": false, 00:16:39.124 "compare_and_write": false, 00:16:39.124 "abort": false, 00:16:39.124 "seek_hole": false, 00:16:39.124 "seek_data": false, 00:16:39.124 "copy": false, 00:16:39.124 "nvme_iov_md": false 00:16:39.124 }, 00:16:39.124 "driver_specific": { 00:16:39.124 "ftl": { 00:16:39.124 "base_bdev": "904852cf-1138-4052-9924-9c01f5ae451a", 00:16:39.124 "cache": "nvc0n1p0" 00:16:39.124 } 00:16:39.124 } 00:16:39.124 } 00:16:39.124 ] 00:16:39.124 17:46:28 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:39.124 17:46:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:39.124 17:46:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:39.124 17:46:28 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:39.124 17:46:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:39.385 17:46:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:39.385 { 00:16:39.385 "name": "ftl0", 00:16:39.385 "aliases": [ 00:16:39.385 "74767a50-ba29-46ab-9b2a-5c33f6d9a290" 00:16:39.385 ], 00:16:39.385 "product_name": "FTL disk", 00:16:39.385 "block_size": 4096, 00:16:39.385 "num_blocks": 23592960, 00:16:39.385 "uuid": "74767a50-ba29-46ab-9b2a-5c33f6d9a290", 00:16:39.385 "assigned_rate_limits": { 00:16:39.385 "rw_ios_per_sec": 0, 00:16:39.385 "rw_mbytes_per_sec": 0, 00:16:39.385 "r_mbytes_per_sec": 0, 00:16:39.385 "w_mbytes_per_sec": 0 00:16:39.385 }, 00:16:39.385 "claimed": false, 00:16:39.385 "zoned": false, 00:16:39.385 "supported_io_types": { 00:16:39.385 "read": true, 00:16:39.385 "write": true, 00:16:39.385 "unmap": true, 00:16:39.385 "flush": true, 00:16:39.385 "reset": false, 00:16:39.385 "nvme_admin": false, 00:16:39.385 "nvme_io": false, 00:16:39.385 "nvme_io_md": false, 00:16:39.385 "write_zeroes": true, 00:16:39.385 "zcopy": false, 00:16:39.385 "get_zone_info": false, 00:16:39.385 "zone_management": false, 00:16:39.385 "zone_append": false, 00:16:39.385 "compare": false, 00:16:39.385 "compare_and_write": false, 00:16:39.385 "abort": false, 00:16:39.385 "seek_hole": false, 00:16:39.385 "seek_data": false, 00:16:39.385 "copy": false, 00:16:39.385 "nvme_iov_md": false 00:16:39.385 }, 00:16:39.385 "driver_specific": { 00:16:39.385 "ftl": { 00:16:39.385 "base_bdev": "904852cf-1138-4052-9924-9c01f5ae451a", 
00:16:39.385 "cache": "nvc0n1p0" 00:16:39.385 } 00:16:39.385 } 00:16:39.385 } 00:16:39.385 ]' 00:16:39.385 17:46:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:39.385 17:46:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:39.385 17:46:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:39.647 [2024-10-13 17:46:29.272856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.272903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:39.647 [2024-10-13 17:46:29.272917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:39.647 [2024-10-13 17:46:29.272928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.272973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:39.647 [2024-10-13 17:46:29.275773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.275902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:39.647 [2024-10-13 17:46:29.275926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:16:39.647 [2024-10-13 17:46:29.275935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.276532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.276548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:39.647 [2024-10-13 17:46:29.276572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:16:39.647 [2024-10-13 17:46:29.276580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.280255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.280278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:39.647 [2024-10-13 17:46:29.280290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:16:39.647 [2024-10-13 17:46:29.280301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.287275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.287381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:39.647 [2024-10-13 17:46:29.287399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:16:39.647 [2024-10-13 17:46:29.287408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.311864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.311971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:39.647 [2024-10-13 17:46:29.312027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.381 ms 00:16:39.647 [2024-10-13 17:46:29.312049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.327495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.327615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:39.647 [2024-10-13 17:46:29.327671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.374 ms 00:16:39.647 [2024-10-13 17:46:29.327694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.327914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.327977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:39.647 [2024-10-13 17:46:29.328001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:16:39.647 [2024-10-13 17:46:29.328048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.647 [2024-10-13 17:46:29.348411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.647 [2024-10-13 17:46:29.348497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:39.647 [2024-10-13 17:46:29.348538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.311 ms 00:16:39.648 [2024-10-13 17:46:29.348554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.648 [2024-10-13 17:46:29.366694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.648 [2024-10-13 17:46:29.366777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:39.648 [2024-10-13 17:46:29.366820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.068 ms 00:16:39.648 [2024-10-13 17:46:29.366836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.648 [2024-10-13 17:46:29.384761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.648 [2024-10-13 17:46:29.384843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:39.648 [2024-10-13 17:46:29.384884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.857 ms 00:16:39.648 [2024-10-13 17:46:29.384901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.648 [2024-10-13 17:46:29.402282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.648 [2024-10-13 17:46:29.402365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:39.648 [2024-10-13 17:46:29.402406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.240 ms 00:16:39.648 [2024-10-13 17:46:29.402423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.648 [2024-10-13 17:46:29.402481] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:39.648 [2024-10-13 17:46:29.402505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402820] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.402996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 
[2024-10-13 17:46:29.403549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.403999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:39.648 [2024-10-13 17:46:29.404384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.404953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:39.648 [2024-10-13 17:46:29.405180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:39.649 [2024-10-13 17:46:29.405921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:39.649 [2024-10-13 17:46:29.405941] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:16:39.649 [2024-10-13 17:46:29.405949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:39.649 [2024-10-13 17:46:29.405956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:39.649 [2024-10-13 17:46:29.405962] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:39.649 [2024-10-13 17:46:29.405970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:39.649 [2024-10-13 17:46:29.405976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:39.649 [2024-10-13 17:46:29.405986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
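A note on the statistics block above: the "WAF: inf" line follows directly from the two counters printed with it. Write amplification factor is media writes divided by user writes, here WAF = total writes / user writes = 960 / 0, and with zero user writes the ratio is reported as infinity; all 960 writes so far are internal metadata traffic. The 100 near-identical "Band N" lines are easiest to read as a tally per state. A possible one-liner, assuming this console output has been captured to a hypothetical file autotest.log:

  grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' autotest.log \
      | awk '{n[$NF]++} END {for (s in n) print s, n[s]}'   # prints "free 100" for a dump of 100 free bands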
00:16:39.649 [2024-10-13 17:46:29.405992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:39.649 [2024-10-13 17:46:29.405999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:39.649 [2024-10-13 17:46:29.406004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:39.649 [2024-10-13 17:46:29.406011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.649 [2024-10-13 17:46:29.406017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:39.649 [2024-10-13 17:46:29.406025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:16:39.649 [2024-10-13 17:46:29.406032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.416344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.649 [2024-10-13 17:46:29.416369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:39.649 [2024-10-13 17:46:29.416381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.272 ms 00:16:39.649 [2024-10-13 17:46:29.416390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.416729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.649 [2024-10-13 17:46:29.416741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:39.649 [2024-10-13 17:46:29.416750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:16:39.649 [2024-10-13 17:46:29.416756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.453172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.649 [2024-10-13 17:46:29.453203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.649 [2024-10-13 17:46:29.453213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.649 [2024-10-13 17:46:29.453221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.453315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.649 [2024-10-13 17:46:29.453322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.649 [2024-10-13 17:46:29.453330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.649 [2024-10-13 17:46:29.453336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.453404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.649 [2024-10-13 17:46:29.453412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.649 [2024-10-13 17:46:29.453422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.649 [2024-10-13 17:46:29.453428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.649 [2024-10-13 17:46:29.453462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.649 [2024-10-13 17:46:29.453469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.649 [2024-10-13 17:46:29.453477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.649 [2024-10-13 17:46:29.453483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.520355] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.520393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.910 [2024-10-13 17:46:29.520405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.520416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.910 [2024-10-13 17:46:29.572107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.910 [2024-10-13 17:46:29.572239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.910 [2024-10-13 17:46:29.572321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.910 [2024-10-13 17:46:29.572441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:39.910 [2024-10-13 17:46:29.572517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.910 [2024-10-13 17:46:29.572604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.910 [2024-10-13 17:46:29.572667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.910 [2024-10-13 17:46:29.572676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.910 [2024-10-13 17:46:29.572684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.910 [2024-10-13 17:46:29.572690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:39.910 [2024-10-13 17:46:29.572858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 299.997 ms, result 0 00:16:39.910 true 00:16:39.910 17:46:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74087 00:16:39.910 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74087 ']' 00:16:39.910 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74087 00:16:39.910 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74087 00:16:39.911 killing process with pid 74087 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74087' 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74087 00:16:39.911 17:46:29 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74087 00:16:46.492 17:46:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:46.751 65536+0 records in 00:16:46.751 65536+0 records out 00:16:46.751 268435456 bytes (268 MB, 256 MiB) copied, 1.0967 s, 245 MB/s 00:16:46.751 17:46:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.011 [2024-10-13 17:46:36.625332] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
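The killprocess trace above is the usual autotest_common.sh teardown: guard against an empty pid, probe the process with kill -0, resolve its command name, check that it is not a bare sudo wrapper, then kill and wait. A minimal sketch of that flow, reconstructed from the xtrace alone rather than from the canonical helper:

  # Reconstructed from the trace above; the real autotest_common.sh helper may handle more cases.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                             # '[' -z 74087 ']'
      kill -0 "$pid" || return 1                            # is the process still alive?
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 here
      fi
      # Assumption: bail out on a plain sudo wrapper; the real helper may resolve
      # the child pid instead. In this run reactor_0 != sudo, so we proceed.
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

The dd numbers that follow are self-consistent: 65536 records of 4 KiB = 268,435,456 bytes, exactly 256 MiB, and 268,435,456 B / 1.0967 s is about 244.8e6 B/s, which dd rounds to the reported 245 MB/s (decimal megabytes). That random_pattern file is what spdk_dd then writes to the ftl0 bdev.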
00:16:47.011 [2024-10-13 17:46:36.625504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74274 ] 00:16:47.011 [2024-10-13 17:46:36.779232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.270 [2024-10-13 17:46:36.872184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.532 [2024-10-13 17:46:37.099621] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:47.532 [2024-10-13 17:46:37.099675] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:47.532 [2024-10-13 17:46:37.248990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.249031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:47.532 [2024-10-13 17:46:37.249042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:47.532 [2024-10-13 17:46:37.249049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.251183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.251213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.532 [2024-10-13 17:46:37.251221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:16:47.532 [2024-10-13 17:46:37.251227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.251289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:47.532 [2024-10-13 17:46:37.251941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:47.532 [2024-10-13 17:46:37.252002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.252009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.532 [2024-10-13 17:46:37.252016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:16:47.532 [2024-10-13 17:46:37.252022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.253290] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:47.532 [2024-10-13 17:46:37.263336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.263451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:47.532 [2024-10-13 17:46:37.263466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.046 ms 00:16:47.532 [2024-10-13 17:46:37.263477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.263545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.263554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:47.532 [2024-10-13 17:46:37.263579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:47.532 [2024-10-13 17:46:37.263585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.269714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:47.532 [2024-10-13 17:46:37.269810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.532 [2024-10-13 17:46:37.269822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:16:47.532 [2024-10-13 17:46:37.269828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.269902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.269910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.532 [2024-10-13 17:46:37.269917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:47.532 [2024-10-13 17:46:37.269923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.269939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.269945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:47.532 [2024-10-13 17:46:37.269952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:47.532 [2024-10-13 17:46:37.269959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.269978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:47.532 [2024-10-13 17:46:37.273120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.273210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.532 [2024-10-13 17:46:37.273223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:16:47.532 [2024-10-13 17:46:37.273230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.273262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.273269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:47.532 [2024-10-13 17:46:37.273276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:47.532 [2024-10-13 17:46:37.273282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.273297] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:47.532 [2024-10-13 17:46:37.273314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:47.532 [2024-10-13 17:46:37.273345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:47.532 [2024-10-13 17:46:37.273357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:47.532 [2024-10-13 17:46:37.273440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:47.532 [2024-10-13 17:46:37.273448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:47.532 [2024-10-13 17:46:37.273458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:47.532 [2024-10-13 17:46:37.273466] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273473] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:47.532 [2024-10-13 17:46:37.273487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:47.532 [2024-10-13 17:46:37.273493] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:47.532 [2024-10-13 17:46:37.273500] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:47.532 [2024-10-13 17:46:37.273506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.273512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:47.532 [2024-10-13 17:46:37.273519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:47.532 [2024-10-13 17:46:37.273524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.273618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.532 [2024-10-13 17:46:37.273627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:47.532 [2024-10-13 17:46:37.273633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:47.532 [2024-10-13 17:46:37.273642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.532 [2024-10-13 17:46:37.273719] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:47.532 [2024-10-13 17:46:37.273727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:47.532 [2024-10-13 17:46:37.273733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:47.532 [2024-10-13 17:46:37.273751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:47.532 [2024-10-13 17:46:37.273769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.532 [2024-10-13 17:46:37.273780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:47.532 [2024-10-13 17:46:37.273786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:47.532 [2024-10-13 17:46:37.273791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.532 [2024-10-13 17:46:37.273802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:47.532 [2024-10-13 17:46:37.273807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:47.532 [2024-10-13 17:46:37.273813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:47.532 [2024-10-13 17:46:37.273824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273829] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:47.532 [2024-10-13 17:46:37.273840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:47.532 [2024-10-13 17:46:37.273845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.532 [2024-10-13 17:46:37.273850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:47.532 [2024-10-13 17:46:37.273855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.533 [2024-10-13 17:46:37.273865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:47.533 [2024-10-13 17:46:37.273870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.533 [2024-10-13 17:46:37.273881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:47.533 [2024-10-13 17:46:37.273886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.533 [2024-10-13 17:46:37.273897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:47.533 [2024-10-13 17:46:37.273902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.533 [2024-10-13 17:46:37.273913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:47.533 [2024-10-13 17:46:37.273918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:47.533 [2024-10-13 17:46:37.273923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.533 [2024-10-13 17:46:37.273929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:47.533 [2024-10-13 17:46:37.273934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:47.533 [2024-10-13 17:46:37.273939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:47.533 [2024-10-13 17:46:37.273950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:47.533 [2024-10-13 17:46:37.273955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273961] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:47.533 [2024-10-13 17:46:37.273967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:47.533 [2024-10-13 17:46:37.273974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.533 [2024-10-13 17:46:37.273979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.533 [2024-10-13 17:46:37.273987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:47.533 [2024-10-13 17:46:37.273992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:47.533 [2024-10-13 17:46:37.273997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:47.533 
[2024-10-13 17:46:37.274003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:47.533 [2024-10-13 17:46:37.274008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:47.533 [2024-10-13 17:46:37.274013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:47.533 [2024-10-13 17:46:37.274020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:47.533 [2024-10-13 17:46:37.274029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:47.533 [2024-10-13 17:46:37.274041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:47.533 [2024-10-13 17:46:37.274047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:47.533 [2024-10-13 17:46:37.274052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:47.533 [2024-10-13 17:46:37.274057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:47.533 [2024-10-13 17:46:37.274063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:47.533 [2024-10-13 17:46:37.274069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:47.533 [2024-10-13 17:46:37.274074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:47.533 [2024-10-13 17:46:37.274080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:47.533 [2024-10-13 17:46:37.274086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:47.533 [2024-10-13 17:46:37.274114] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:47.533 [2024-10-13 17:46:37.274121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:47.533 [2024-10-13 17:46:37.274133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:47.533 [2024-10-13 17:46:37.274139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:47.533 [2024-10-13 17:46:37.274144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:47.533 [2024-10-13 17:46:37.274150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.274156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:47.533 [2024-10-13 17:46:37.274165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:16:47.533 [2024-10-13 17:46:37.274173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.298549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.298588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.533 [2024-10-13 17:46:37.298598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.323 ms 00:16:47.533 [2024-10-13 17:46:37.298604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.298702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.298711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:47.533 [2024-10-13 17:46:37.298718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:47.533 [2024-10-13 17:46:37.298727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.342658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.342689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.533 [2024-10-13 17:46:37.342699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.914 ms 00:16:47.533 [2024-10-13 17:46:37.342705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.342786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.342795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.533 [2024-10-13 17:46:37.342803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:47.533 [2024-10-13 17:46:37.342809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.343192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.343204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.533 [2024-10-13 17:46:37.343212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:16:47.533 [2024-10-13 17:46:37.343219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.533 [2024-10-13 17:46:37.343334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.533 [2024-10-13 17:46:37.343343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.533 [2024-10-13 17:46:37.343350] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:47.533 [2024-10-13 17:46:37.343356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.794 [2024-10-13 17:46:37.355697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.794 [2024-10-13 17:46:37.355723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.794 [2024-10-13 17:46:37.355732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.325 ms 00:16:47.794 [2024-10-13 17:46:37.355738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.366549] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:47.795 [2024-10-13 17:46:37.366583] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:47.795 [2024-10-13 17:46:37.366593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.366600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:47.795 [2024-10-13 17:46:37.366608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.771 ms 00:16:47.795 [2024-10-13 17:46:37.366614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.385293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.385408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:47.795 [2024-10-13 17:46:37.385429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.621 ms 00:16:47.795 [2024-10-13 17:46:37.385436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.394592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.394673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:47.795 [2024-10-13 17:46:37.394713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.101 ms 00:16:47.795 [2024-10-13 17:46:37.394730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.403457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.403542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:47.795 [2024-10-13 17:46:37.403600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.679 ms 00:16:47.795 [2024-10-13 17:46:37.403619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.404097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.404174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.795 [2024-10-13 17:46:37.404484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:16:47.795 [2024-10-13 17:46:37.404518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.452569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.452717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:47.795 [2024-10-13 17:46:37.452768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.993 ms 00:16:47.795 [2024-10-13 17:46:37.452786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.460957] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:47.795 [2024-10-13 17:46:37.475483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.475611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.795 [2024-10-13 17:46:37.475656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.575 ms 00:16:47.795 [2024-10-13 17:46:37.475674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.475765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.475786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:47.795 [2024-10-13 17:46:37.475862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:47.795 [2024-10-13 17:46:37.475880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.475940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.475958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.795 [2024-10-13 17:46:37.475999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:47.795 [2024-10-13 17:46:37.476099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.476145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.476200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.795 [2024-10-13 17:46:37.476224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:47.795 [2024-10-13 17:46:37.476446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.476524] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:47.795 [2024-10-13 17:46:37.476548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.476625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:47.795 [2024-10-13 17:46:37.476646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:47.795 [2024-10-13 17:46:37.476662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.495314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.495413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.795 [2024-10-13 17:46:37.495460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.597 ms 00:16:47.795 [2024-10-13 17:46:37.495477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.795 [2024-10-13 17:46:37.495567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.795 [2024-10-13 17:46:37.495758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.795 [2024-10-13 17:46:37.495793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:47.795 [2024-10-13 17:46:37.495810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
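The superblock region table and the human-readable layout dump above are mutually consistent once the 4 KiB FTL block size is applied. The l2p region (type 0x2) spans blk_sz 0x5a00 = 23040 blocks, and 23040 * 4 KiB = 90.00 MiB, matching "Region l2p ... blocks: 90.00 MiB"; that is also exactly what the mapping table needs, since 23,592,960 L2P entries at 4 bytes each is again 90 MiB. Likewise the four p2l regions (types 0xa..0xd) are 0x800 = 2048 blocks = 8.00 MiB each, in line with "P2L checkpoint pages: 2048", and the data_btm region (type 0x9) is 0x1900000 = 26,214,400 blocks = 102400.00 MiB. A quick check using only bash arithmetic:

  echo $(( 0x5a00 ))                      # 23040 blocks in the l2p region
  echo $(( 0x5a00 * 4096 / 1048576 ))     # 90 MiB, assuming 4 KiB FTL blocks
  echo $(( 23592960 * 4 / 1048576 ))      # 90 MiB of L2P entries at 4 bytes each
  echo $(( 0x1900000 * 4096 / 1048576 ))  # 102400 MiB for the data_btm region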
00:16:47.795 [2024-10-13 17:46:37.496584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:47.795 [2024-10-13 17:46:37.498913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 247.322 ms, result 0 00:16:47.795 [2024-10-13 17:46:37.499902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.795 [2024-10-13 17:46:37.510666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.750  [2024-10-13T17:46:39.947Z] Copying: 32/256 [MB] (32 MBps) [2024-10-13T17:46:40.516Z] Copying: 54/256 [MB] (22 MBps) [2024-10-13T17:46:41.902Z] Copying: 88/256 [MB] (33 MBps) [2024-10-13T17:46:42.845Z] Copying: 126/256 [MB] (38 MBps) [2024-10-13T17:46:43.789Z] Copying: 148/256 [MB] (21 MBps) [2024-10-13T17:46:44.732Z] Copying: 168/256 [MB] (20 MBps) [2024-10-13T17:46:45.675Z] Copying: 185/256 [MB] (17 MBps) [2024-10-13T17:46:46.663Z] Copying: 203/256 [MB] (17 MBps) [2024-10-13T17:46:47.629Z] Copying: 215/256 [MB] (12 MBps) [2024-10-13T17:46:48.572Z] Copying: 229/256 [MB] (14 MBps) [2024-10-13T17:46:49.516Z] Copying: 245/256 [MB] (15 MBps) [2024-10-13T17:46:49.516Z] Copying: 256/256 [MB] (average 21 MBps)[2024-10-13 17:46:49.478996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.702 [2024-10-13 17:46:49.490188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.702 [2024-10-13 17:46:49.490396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:59.703 [2024-10-13 17:46:49.490424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.703 [2024-10-13 17:46:49.490434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.703 [2024-10-13 17:46:49.490471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:59.703 [2024-10-13 17:46:49.493836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.703 [2024-10-13 17:46:49.493879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:59.703 [2024-10-13 17:46:49.493904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:16:59.703 [2024-10-13 17:46:49.493913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.703 [2024-10-13 17:46:49.497154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.703 [2024-10-13 17:46:49.497330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:59.703 [2024-10-13 17:46:49.497352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:16:59.703 [2024-10-13 17:46:49.497362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.703 [2024-10-13 17:46:49.506125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.703 [2024-10-13 17:46:49.506174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:59.703 [2024-10-13 17:46:49.506187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.736 ms 00:16:59.703 [2024-10-13 17:46:49.506196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.703 [2024-10-13 17:46:49.513622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.703 
[2024-10-13 17:46:49.513782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:59.703 [2024-10-13 17:46:49.513856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.366 ms 00:16:59.703 [2024-10-13 17:46:49.513881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.965 [2024-10-13 17:46:49.540957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.965 [2024-10-13 17:46:49.541152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:59.965 [2024-10-13 17:46:49.541173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.980 ms 00:16:59.965 [2024-10-13 17:46:49.541181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.965 [2024-10-13 17:46:49.558942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.965 [2024-10-13 17:46:49.558993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:59.965 [2024-10-13 17:46:49.559008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.673 ms 00:16:59.965 [2024-10-13 17:46:49.559016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.965 [2024-10-13 17:46:49.559218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.965 [2024-10-13 17:46:49.559234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:59.965 [2024-10-13 17:46:49.559245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:59.965 [2024-10-13 17:46:49.559253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.965 [2024-10-13 17:46:49.582589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.965 [2024-10-13 17:46:49.582760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:59.965 [2024-10-13 17:46:49.582779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.318 ms 00:16:59.965 [2024-10-13 17:46:49.582786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.965 [2024-10-13 17:46:49.602504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.965 [2024-10-13 17:46:49.602659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:59.966 [2024-10-13 17:46:49.602675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.679 ms 00:16:59.966 [2024-10-13 17:46:49.602680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.966 [2024-10-13 17:46:49.621727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.966 [2024-10-13 17:46:49.621852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:59.966 [2024-10-13 17:46:49.621867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.013 ms 00:16:59.966 [2024-10-13 17:46:49.621873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.966 [2024-10-13 17:46:49.640299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.966 [2024-10-13 17:46:49.640333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:59.966 [2024-10-13 17:46:49.640341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.357 ms 00:16:59.966 [2024-10-13 17:46:49.640347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.966 [2024-10-13 17:46:49.640382] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:59.966 [2024-10-13 17:46:49.640396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640546] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 
17:46:49.640743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:16:59.966 [2024-10-13 17:46:49.640931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:59.966 [2024-10-13 17:46:49.640989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.640995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:59.967 [2024-10-13 17:46:49.641103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:59.967 [2024-10-13 17:46:49.641112] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:16:59.967 [2024-10-13 17:46:49.641119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:59.967 [2024-10-13 17:46:49.641125] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:59.967 [2024-10-13 17:46:49.641131] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:59.967 [2024-10-13 17:46:49.641137] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:59.967 [2024-10-13 17:46:49.641143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:59.967 [2024-10-13 17:46:49.641150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:59.967 [2024-10-13 17:46:49.641157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:59.967 [2024-10-13 17:46:49.641162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:59.967 [2024-10-13 17:46:49.641167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:59.967 [2024-10-13 17:46:49.641173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.967 [2024-10-13 17:46:49.641179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:59.967 [2024-10-13 17:46:49.641185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:16:59.967 [2024-10-13 17:46:49.641191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.652272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.967 [2024-10-13 17:46:49.652382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:59.967 [2024-10-13 17:46:49.652394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:16:59.967 [2024-10-13 17:46:49.652401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.652733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.967 [2024-10-13 17:46:49.652748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:59.967 [2024-10-13 17:46:49.652755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:16:59.967 [2024-10-13 17:46:49.652766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.683198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.967 [2024-10-13 17:46:49.683297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.967 [2024-10-13 17:46:49.683339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.967 [2024-10-13 17:46:49.683357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.683452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.967 [2024-10-13 17:46:49.683472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.967 [2024-10-13 17:46:49.683487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:59.967 [2024-10-13 17:46:49.683505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.683550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.967 [2024-10-13 17:46:49.683600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.967 [2024-10-13 17:46:49.683666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.967 [2024-10-13 17:46:49.683684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.683711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.967 [2024-10-13 17:46:49.683729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.967 [2024-10-13 17:46:49.683744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.967 [2024-10-13 17:46:49.683759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.967 [2024-10-13 17:46:49.746993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.967 [2024-10-13 17:46:49.747121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.967 [2024-10-13 17:46:49.747164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.967 [2024-10-13 17:46:49.747182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.798484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.798621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.229 [2024-10-13 17:46:49.798665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.798688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.798764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.798783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.229 [2024-10-13 17:46:49.798799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.798814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.798849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.798865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.229 [2024-10-13 17:46:49.798881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.798935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.799037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.799056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.229 [2024-10-13 17:46:49.799073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.799087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.799122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.799221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.229 
[2024-10-13 17:46:49.799237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.799251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.799298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.799316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.229 [2024-10-13 17:46:49.799368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.799386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.799441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.229 [2024-10-13 17:46:49.799507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.229 [2024-10-13 17:46:49.799573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.229 [2024-10-13 17:46:49.799589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.229 [2024-10-13 17:46:49.799728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.542 ms, result 0 00:17:01.172 00:17:01.172 00:17:01.172 17:46:50 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:01.172 17:46:50 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74427 00:17:01.172 17:46:50 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74427 00:17:01.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.172 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74427 ']' 00:17:01.172 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.172 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.172 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.172 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.173 17:46:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:01.173 [2024-10-13 17:46:50.720420] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:17:01.173 [2024-10-13 17:46:50.720698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74427 ] 00:17:01.173 [2024-10-13 17:46:50.858135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.173 [2024-10-13 17:46:50.954212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.744 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.744 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:01.744 17:46:51 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:02.005 [2024-10-13 17:46:51.754363] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:02.005 [2024-10-13 17:46:51.754419] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:02.268 [2024-10-13 17:46:51.929869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.930049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:02.268 [2024-10-13 17:46:51.930073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.268 [2024-10-13 17:46:51.930083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.933238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.933383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.268 [2024-10-13 17:46:51.933405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:17:02.268 [2024-10-13 17:46:51.933414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.933624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:02.268 [2024-10-13 17:46:51.934323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:02.268 [2024-10-13 17:46:51.934356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.934365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.268 [2024-10-13 17:46:51.934376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:17:02.268 [2024-10-13 17:46:51.934383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.936056] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:02.268 [2024-10-13 17:46:51.949835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.949876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:02.268 [2024-10-13 17:46:51.949889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.785 ms 00:17:02.268 [2024-10-13 17:46:51.949899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.949986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.949999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:02.268 [2024-10-13 17:46:51.950009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:02.268 [2024-10-13 17:46:51.950018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.957511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.957549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.268 [2024-10-13 17:46:51.957574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.444 ms 00:17:02.268 [2024-10-13 17:46:51.957585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.957688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.957702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.268 [2024-10-13 17:46:51.957710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:02.268 [2024-10-13 17:46:51.957720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.957744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.957758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:02.268 [2024-10-13 17:46:51.957765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:02.268 [2024-10-13 17:46:51.957774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.957798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:02.268 [2024-10-13 17:46:51.961654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.961682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.268 [2024-10-13 17:46:51.961693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:17:02.268 [2024-10-13 17:46:51.961701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.961765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.961774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:02.268 [2024-10-13 17:46:51.961785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:02.268 [2024-10-13 17:46:51.961792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.268 [2024-10-13 17:46:51.961814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:02.268 [2024-10-13 17:46:51.961836] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:02.268 [2024-10-13 17:46:51.961879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:02.268 [2024-10-13 17:46:51.961894] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:02.268 [2024-10-13 17:46:51.962005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:02.268 [2024-10-13 17:46:51.962016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:02.268 [2024-10-13 17:46:51.962029] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:02.268 [2024-10-13 17:46:51.962039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:02.268 [2024-10-13 17:46:51.962052] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:02.268 [2024-10-13 17:46:51.962061] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:02.268 [2024-10-13 17:46:51.962070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:02.268 [2024-10-13 17:46:51.962077] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:02.268 [2024-10-13 17:46:51.962089] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:02.268 [2024-10-13 17:46:51.962097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.268 [2024-10-13 17:46:51.962107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:02.268 [2024-10-13 17:46:51.962114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:02.269 [2024-10-13 17:46:51.962123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:51.962210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:51.962220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:02.269 [2024-10-13 17:46:51.962229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:02.269 [2024-10-13 17:46:51.962237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:51.962337] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:02.269 [2024-10-13 17:46:51.962348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:02.269 [2024-10-13 17:46:51.962357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:02.269 [2024-10-13 17:46:51.962383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:02.269 [2024-10-13 17:46:51.962410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.269 [2024-10-13 17:46:51.962425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:02.269 [2024-10-13 17:46:51.962433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:02.269 [2024-10-13 17:46:51.962439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.269 [2024-10-13 17:46:51.962448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:02.269 [2024-10-13 17:46:51.962454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:02.269 [2024-10-13 17:46:51.962469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 
[2024-10-13 17:46:51.962476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:02.269 [2024-10-13 17:46:51.962484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:02.269 [2024-10-13 17:46:51.962513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:02.269 [2024-10-13 17:46:51.962538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:02.269 [2024-10-13 17:46:51.962573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:02.269 [2024-10-13 17:46:51.962596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:02.269 [2024-10-13 17:46:51.962620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.269 [2024-10-13 17:46:51.962635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:02.269 [2024-10-13 17:46:51.962643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:02.269 [2024-10-13 17:46:51.962649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.269 [2024-10-13 17:46:51.962658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:02.269 [2024-10-13 17:46:51.962664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:02.269 [2024-10-13 17:46:51.962674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:02.269 [2024-10-13 17:46:51.962688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:02.269 [2024-10-13 17:46:51.962696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962704] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:02.269 [2024-10-13 17:46:51.962712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:02.269 [2024-10-13 17:46:51.962721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.269 [2024-10-13 17:46:51.962740] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:02.269 [2024-10-13 17:46:51.962748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:02.269 [2024-10-13 17:46:51.962757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:02.269 [2024-10-13 17:46:51.962765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:02.269 [2024-10-13 17:46:51.962772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:02.269 [2024-10-13 17:46:51.962779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:02.269 [2024-10-13 17:46:51.962789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:02.269 [2024-10-13 17:46:51.962798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:02.269 [2024-10-13 17:46:51.962818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:02.269 [2024-10-13 17:46:51.962827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:02.269 [2024-10-13 17:46:51.962834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:02.269 [2024-10-13 17:46:51.962843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:02.269 [2024-10-13 17:46:51.962850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:02.269 [2024-10-13 17:46:51.962858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:02.269 [2024-10-13 17:46:51.962865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:02.269 [2024-10-13 17:46:51.962874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:02.269 [2024-10-13 17:46:51.962881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:02.269 [2024-10-13 17:46:51.962921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:02.269 [2024-10-13 
17:46:51.962930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:02.269 [2024-10-13 17:46:51.962948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:02.269 [2024-10-13 17:46:51.962957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:02.269 [2024-10-13 17:46:51.962965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:02.269 [2024-10-13 17:46:51.962974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:51.962982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:02.269 [2024-10-13 17:46:51.962991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:17:02.269 [2024-10-13 17:46:51.962998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:51.994013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:51.994147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.269 [2024-10-13 17:46:51.994208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:17:02.269 [2024-10-13 17:46:51.994232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:51.994377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:51.994406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.269 [2024-10-13 17:46:51.994476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:02.269 [2024-10-13 17:46:51.994499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:52.028836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:52.028971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.269 [2024-10-13 17:46:52.029031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.297 ms 00:17:02.269 [2024-10-13 17:46:52.029058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:52.029152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:52.029178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.269 [2024-10-13 17:46:52.029201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:02.269 [2024-10-13 17:46:52.029220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:52.029817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.269 [2024-10-13 17:46:52.029923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.269 [2024-10-13 17:46:52.029975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:17:02.269 [2024-10-13 17:46:52.029999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:02.269 [2024-10-13 17:46:52.030166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.270 [2024-10-13 17:46:52.030220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.270 [2024-10-13 17:46:52.030247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:02.270 [2024-10-13 17:46:52.030266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.270 [2024-10-13 17:46:52.048341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.270 [2024-10-13 17:46:52.048467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.270 [2024-10-13 17:46:52.048523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.003 ms 00:17:02.270 [2024-10-13 17:46:52.048552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.270 [2024-10-13 17:46:52.063022] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:02.270 [2024-10-13 17:46:52.063177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:02.270 [2024-10-13 17:46:52.063243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.270 [2024-10-13 17:46:52.063265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:02.270 [2024-10-13 17:46:52.063289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:17:02.270 [2024-10-13 17:46:52.063308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.531 [2024-10-13 17:46:52.089055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.531 [2024-10-13 17:46:52.089221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:02.531 [2024-10-13 17:46:52.089289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.654 ms 00:17:02.531 [2024-10-13 17:46:52.089313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.531 [2024-10-13 17:46:52.102734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.531 [2024-10-13 17:46:52.102908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:02.531 [2024-10-13 17:46:52.102980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.019 ms 00:17:02.531 [2024-10-13 17:46:52.103005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.531 [2024-10-13 17:46:52.115739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.531 [2024-10-13 17:46:52.115896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:02.531 [2024-10-13 17:46:52.115960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.563 ms 00:17:02.531 [2024-10-13 17:46:52.115982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.531 [2024-10-13 17:46:52.116725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.531 [2024-10-13 17:46:52.116851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.531 [2024-10-13 17:46:52.117010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:17:02.531 [2024-10-13 17:46:52.117092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.531 [2024-10-13 
17:46:52.199822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.531 [2024-10-13 17:46:52.200048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:02.532 [2024-10-13 17:46:52.200125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.675 ms 00:17:02.532 [2024-10-13 17:46:52.200165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.212265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.532 [2024-10-13 17:46:52.237213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.237396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.532 [2024-10-13 17:46:52.237457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.939 ms 00:17:02.532 [2024-10-13 17:46:52.237485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.237644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.237678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.532 [2024-10-13 17:46:52.237701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:02.532 [2024-10-13 17:46:52.237724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.237808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.237833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.532 [2024-10-13 17:46:52.237856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:02.532 [2024-10-13 17:46:52.237938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.238031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.238064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.532 [2024-10-13 17:46:52.238111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:02.532 [2024-10-13 17:46:52.238140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.238199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.532 [2024-10-13 17:46:52.238230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.238250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.532 [2024-10-13 17:46:52.238274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:02.532 [2024-10-13 17:46:52.238298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.265034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.265195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.532 [2024-10-13 17:46:52.265221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.692 ms 00:17:02.532 [2024-10-13 17:46:52.265231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.265346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.532 [2024-10-13 17:46:52.265359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.532 [2024-10-13 17:46:52.265372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:02.532 [2024-10-13 17:46:52.265380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.532 [2024-10-13 17:46:52.266777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.532 [2024-10-13 17:46:52.270338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.461 ms, result 0 00:17:02.532 [2024-10-13 17:46:52.272773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.532 Some configs were skipped because the RPC state that can call them passed over. 00:17:02.532 17:46:52 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:02.792 [2024-10-13 17:46:52.521165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.792 [2024-10-13 17:46:52.521359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:02.792 [2024-10-13 17:46:52.521421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:17:02.792 [2024-10-13 17:46:52.521449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.792 [2024-10-13 17:46:52.521510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.467 ms, result 0 00:17:02.792 true 00:17:02.792 17:46:52 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:03.054 [2024-10-13 17:46:52.737259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.054 [2024-10-13 17:46:52.737445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:03.054 [2024-10-13 17:46:52.737512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:17:03.054 [2024-10-13 17:46:52.737537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.054 [2024-10-13 17:46:52.737627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.317 ms, result 0 00:17:03.054 true 00:17:03.054 17:46:52 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74427 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74427 ']' 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74427 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74427 00:17:03.054 killing process with pid 74427 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74427' 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74427 00:17:03.054 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74427 00:17:03.999 [2024-10-13 17:46:53.515887] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.515945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:03.999 [2024-10-13 17:46:53.515957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.999 [2024-10-13 17:46:53.515965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.515983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:03.999 [2024-10-13 17:46:53.518123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.518153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:03.999 [2024-10-13 17:46:53.518167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:17:03.999 [2024-10-13 17:46:53.518174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.518386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.518393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:03.999 [2024-10-13 17:46:53.518402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:03.999 [2024-10-13 17:46:53.518408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.521594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.521619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:03.999 [2024-10-13 17:46:53.521629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:17:03.999 [2024-10-13 17:46:53.521635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.526865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.526991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:03.999 [2024-10-13 17:46:53.527010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:17:03.999 [2024-10-13 17:46:53.527017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.534571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.534595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:03.999 [2024-10-13 17:46:53.534606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.508 ms 00:17:03.999 [2024-10-13 17:46:53.534618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.540975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.541001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:03.999 [2024-10-13 17:46:53.541011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.326 ms 00:17:03.999 [2024-10-13 17:46:53.541020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.541124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.541132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:03.999 [2024-10-13 17:46:53.541141] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:03.999 [2024-10-13 17:46:53.541146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.549233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.549255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:03.999 [2024-10-13 17:46:53.549265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.071 ms 00:17:03.999 [2024-10-13 17:46:53.549271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.556776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.556799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:03.999 [2024-10-13 17:46:53.556809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.476 ms 00:17:03.999 [2024-10-13 17:46:53.556814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.563961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.563984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:03.999 [2024-10-13 17:46:53.563992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.117 ms 00:17:03.999 [2024-10-13 17:46:53.563997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.571174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.999 [2024-10-13 17:46:53.571197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:03.999 [2024-10-13 17:46:53.571206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.126 ms 00:17:03.999 [2024-10-13 17:46:53.571212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.999 [2024-10-13 17:46:53.571246] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:03.999-00:17:04.000 [2024-10-13 17:46:53.571258-17:46:53.571945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band lines condensed]
00:17:04.000 [2024-10-13 17:46:53.571957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.000 [2024-10-13 17:46:53.571966] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:17:04.000 [2024-10-13 17:46:53.571977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.000 [2024-10-13 17:46:53.571986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.000 [2024-10-13 17:46:53.571993] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.000 [2024-10-13 17:46:53.572001] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.000 [2024-10-13 17:46:53.572006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.000 [2024-10-13 17:46:53.572014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.000 [2024-10-13 17:46:53.572019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.000 [2024-10-13 17:46:53.572025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.000 [2024-10-13 17:46:53.572030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.000 [2024-10-13 17:46:53.572037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:04.000 [2024-10-13 17:46:53.572043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.000 [2024-10-13 17:46:53.572051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:17:04.000 [2024-10-13 17:46:53.572057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.000 [2024-10-13 17:46:53.582079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.000 [2024-10-13 17:46:53.582181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.000 [2024-10-13 17:46:53.582198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.008 ms 00:17:04.001 [2024-10-13 17:46:53.582204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.582514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.001 [2024-10-13 17:46:53.582527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.001 [2024-10-13 17:46:53.582539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:04.001 [2024-10-13 17:46:53.582545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.619351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.619378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.001 [2024-10-13 17:46:53.619389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.619396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.619481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.619489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.001 [2024-10-13 17:46:53.619498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.619504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.619546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.619554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.001 [2024-10-13 17:46:53.619579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.619585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.619601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.619607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.001 [2024-10-13 17:46:53.619615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.619621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.683176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.683210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.001 [2024-10-13 17:46:53.683220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.683227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 
17:46:53.734901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.734935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.001 [2024-10-13 17:46:53.734946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.734952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.001 [2024-10-13 17:46:53.735037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.001 [2024-10-13 17:46:53.735083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.001 [2024-10-13 17:46:53.735183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.001 [2024-10-13 17:46:53.735235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.001 [2024-10-13 17:46:53.735295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.001 [2024-10-13 17:46:53.735349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.001 [2024-10-13 17:46:53.735357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.001 [2024-10-13 17:46:53.735363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.001 [2024-10-13 17:46:53.735484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 219.573 ms, result 0 00:17:04.944 17:46:54 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:04.944 17:46:54 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:04.944 [2024-10-13 17:46:54.578164] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:04.944 [2024-10-13 17:46:54.578345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74480 ] 00:17:04.944 [2024-10-13 17:46:54.736218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.204 [2024-10-13 17:46:54.882344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.466 [2024-10-13 17:46:55.211974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.466 [2024-10-13 17:46:55.212069] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.728 [2024-10-13 17:46:55.377576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.377641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.728 [2024-10-13 17:46:55.377658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:05.728 [2024-10-13 17:46:55.377667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.380973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.381023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.728 [2024-10-13 17:46:55.381035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:17:05.728 [2024-10-13 17:46:55.381043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.381170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.728 [2024-10-13 17:46:55.381937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.728 [2024-10-13 17:46:55.381970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.381980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.728 [2024-10-13 17:46:55.381990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:17:05.728 [2024-10-13 17:46:55.381998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.384316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:05.728 [2024-10-13 17:46:55.399676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.399894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:05.728 [2024-10-13 17:46:55.399917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.361 ms 00:17:05.728 [2024-10-13 17:46:55.399933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.400526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.400684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:05.728 [2024-10-13 17:46:55.400722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.063 ms 00:17:05.728 [2024-10-13 17:46:55.400748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.414382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.414442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.728 [2024-10-13 17:46:55.414455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.498 ms 00:17:05.728 [2024-10-13 17:46:55.414465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.414623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.414636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.728 [2024-10-13 17:46:55.414647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:05.728 [2024-10-13 17:46:55.414655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.414683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.414693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.728 [2024-10-13 17:46:55.414703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.728 [2024-10-13 17:46:55.414714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.414738] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.728 [2024-10-13 17:46:55.419436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.419481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.728 [2024-10-13 17:46:55.419493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:17:05.728 [2024-10-13 17:46:55.419502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.419586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.419598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.728 [2024-10-13 17:46:55.419609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:05.728 [2024-10-13 17:46:55.419617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.419641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:05.728 [2024-10-13 17:46:55.419668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:05.728 [2024-10-13 17:46:55.419711] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:05.728 [2024-10-13 17:46:55.419728] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:05.728 [2024-10-13 17:46:55.419842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.728 [2024-10-13 17:46:55.419854] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.728 [2024-10-13 17:46:55.419866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.728 [2024-10-13 17:46:55.419878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.728 [2024-10-13 17:46:55.419888] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.728 [2024-10-13 17:46:55.419897] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.728 [2024-10-13 17:46:55.419910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.728 [2024-10-13 17:46:55.419918] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.728 [2024-10-13 17:46:55.419927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.728 [2024-10-13 17:46:55.419937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.419946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.728 [2024-10-13 17:46:55.419954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:05.728 [2024-10-13 17:46:55.419961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.420051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.728 [2024-10-13 17:46:55.420060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.728 [2024-10-13 17:46:55.420068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:05.728 [2024-10-13 17:46:55.420079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.728 [2024-10-13 17:46:55.420198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.728 [2024-10-13 17:46:55.420209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.728 [2024-10-13 17:46:55.420219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.728 [2024-10-13 17:46:55.420243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.728 [2024-10-13 17:46:55.420267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.728 [2024-10-13 17:46:55.420281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.728 [2024-10-13 17:46:55.420288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.728 [2024-10-13 17:46:55.420295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.728 [2024-10-13 17:46:55.420311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.728 [2024-10-13 17:46:55.420318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.728 [2024-10-13 17:46:55.420325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420331] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.728 [2024-10-13 17:46:55.420338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.728 [2024-10-13 17:46:55.420358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.728 [2024-10-13 17:46:55.420384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.728 [2024-10-13 17:46:55.420405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.728 [2024-10-13 17:46:55.420427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.728 [2024-10-13 17:46:55.420440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.728 [2024-10-13 17:46:55.420447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.728 [2024-10-13 17:46:55.420453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.728 [2024-10-13 17:46:55.420460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.729 [2024-10-13 17:46:55.420467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.729 [2024-10-13 17:46:55.420475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.729 [2024-10-13 17:46:55.420483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.729 [2024-10-13 17:46:55.420490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.729 [2024-10-13 17:46:55.420498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.729 [2024-10-13 17:46:55.420504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.729 [2024-10-13 17:46:55.420512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.729 [2024-10-13 17:46:55.420519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.729 [2024-10-13 17:46:55.420526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.729 [2024-10-13 17:46:55.420533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.729 [2024-10-13 17:46:55.420542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.729 [2024-10-13 17:46:55.420550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.729 [2024-10-13 17:46:55.420584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.729 
[2024-10-13 17:46:55.420592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.729 [2024-10-13 17:46:55.420601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.729 [2024-10-13 17:46:55.420609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.729 [2024-10-13 17:46:55.420616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.729 [2024-10-13 17:46:55.420624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.729 [2024-10-13 17:46:55.420635] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.729 [2024-10-13 17:46:55.420649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.729 [2024-10-13 17:46:55.420666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.729 [2024-10-13 17:46:55.420675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.729 [2024-10-13 17:46:55.420683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.729 [2024-10-13 17:46:55.420691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.729 [2024-10-13 17:46:55.420698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.729 [2024-10-13 17:46:55.420706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.729 [2024-10-13 17:46:55.420714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.729 [2024-10-13 17:46:55.420721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.729 [2024-10-13 17:46:55.420729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.729 [2024-10-13 17:46:55.420769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.729 [2024-10-13 17:46:55.420778] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.729 [2024-10-13 17:46:55.420795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.729 [2024-10-13 17:46:55.420802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.729 [2024-10-13 17:46:55.420809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.729 [2024-10-13 17:46:55.420816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.420825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.729 [2024-10-13 17:46:55.420833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:17:05.729 [2024-10-13 17:46:55.420844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.459517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.459600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.729 [2024-10-13 17:46:55.459613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.612 ms 00:17:05.729 [2024-10-13 17:46:55.459622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.459775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.459787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.729 [2024-10-13 17:46:55.459796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:05.729 [2024-10-13 17:46:55.459811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.508414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.508696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.729 [2024-10-13 17:46:55.508722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.570 ms 00:17:05.729 [2024-10-13 17:46:55.508733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.508877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.508891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.729 [2024-10-13 17:46:55.508902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.729 [2024-10-13 17:46:55.508911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.509634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.509667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.729 [2024-10-13 17:46:55.509678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:17:05.729 [2024-10-13 17:46:55.509687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 
17:46:55.509868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.509889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.729 [2024-10-13 17:46:55.509899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:05.729 [2024-10-13 17:46:55.509907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.729 [2024-10-13 17:46:55.529062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.729 [2024-10-13 17:46:55.529108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.729 [2024-10-13 17:46:55.529121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.128 ms 00:17:05.729 [2024-10-13 17:46:55.529130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.544566] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:05.991 [2024-10-13 17:46:55.544617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:05.991 [2024-10-13 17:46:55.544633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.544644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:05.991 [2024-10-13 17:46:55.544654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.377 ms 00:17:05.991 [2024-10-13 17:46:55.544663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.571093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.571157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:05.991 [2024-10-13 17:46:55.571170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.327 ms 00:17:05.991 [2024-10-13 17:46:55.571178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.584679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.584728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:05.991 [2024-10-13 17:46:55.584742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.395 ms 00:17:05.991 [2024-10-13 17:46:55.584750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.598001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.598046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:05.991 [2024-10-13 17:46:55.598059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.152 ms 00:17:05.991 [2024-10-13 17:46:55.598067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.598773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.598855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.991 [2024-10-13 17:46:55.598870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:17:05.991 [2024-10-13 17:46:55.598878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.672734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.672807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:05.991 [2024-10-13 17:46:55.672825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.823 ms 00:17:05.991 [2024-10-13 17:46:55.672834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.685405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.991 [2024-10-13 17:46:55.710426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.710491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.991 [2024-10-13 17:46:55.710508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.476 ms 00:17:05.991 [2024-10-13 17:46:55.710519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.710670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.710687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.991 [2024-10-13 17:46:55.710698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:05.991 [2024-10-13 17:46:55.710706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.710776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.710786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.991 [2024-10-13 17:46:55.710795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:05.991 [2024-10-13 17:46:55.710804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.710831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.710846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.991 [2024-10-13 17:46:55.710860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:05.991 [2024-10-13 17:46:55.710869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.710915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.991 [2024-10-13 17:46:55.710927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.710936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.991 [2024-10-13 17:46:55.710945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:05.991 [2024-10-13 17:46:55.710954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.738617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.738680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.991 [2024-10-13 17:46:55.738695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.638 ms 00:17:05.991 [2024-10-13 17:46:55.738704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.738854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.991 [2024-10-13 17:46:55.738869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:05.991 [2024-10-13 17:46:55.738880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:05.991 [2024-10-13 17:46:55.738888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.991 [2024-10-13 17:46:55.740816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.991 [2024-10-13 17:46:55.744847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.819 ms, result 0 00:17:05.991 [2024-10-13 17:46:55.746316] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:05.991 [2024-10-13 17:46:55.760191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.377  [2024-10-13T17:46:58.142Z] Copying: 19/256 [MB] (19 MBps) [2024-10-13T17:46:59.091Z] Copying: 36/256 [MB] (17 MBps) [2024-10-13T17:47:00.036Z] Copying: 49/256 [MB] (12 MBps) [2024-10-13T17:47:00.979Z] Copying: 63/256 [MB] (13 MBps) [2024-10-13T17:47:01.924Z] Copying: 80/256 [MB] (17 MBps) [2024-10-13T17:47:02.894Z] Copying: 95/256 [MB] (14 MBps) [2024-10-13T17:47:03.904Z] Copying: 115/256 [MB] (20 MBps) [2024-10-13T17:47:04.848Z] Copying: 139/256 [MB] (24 MBps) [2024-10-13T17:47:05.793Z] Copying: 158/256 [MB] (18 MBps) [2024-10-13T17:47:07.178Z] Copying: 171/256 [MB] (13 MBps) [2024-10-13T17:47:08.129Z] Copying: 189/256 [MB] (18 MBps) [2024-10-13T17:47:09.073Z] Copying: 206/256 [MB] (16 MBps) [2024-10-13T17:47:10.016Z] Copying: 228/256 [MB] (22 MBps) [2024-10-13T17:47:10.016Z] Copying: 249/256 [MB] (21 MBps) [2024-10-13T17:47:10.016Z] Copying: 256/256 [MB] (average 18 MBps)[2024-10-13 17:47:09.982377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.202 [2024-10-13 17:47:09.989971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.202 [2024-10-13 17:47:09.990003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.202 [2024-10-13 17:47:09.990015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:20.202 [2024-10-13 17:47:09.990022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.203 [2024-10-13 17:47:09.990040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.203 [2024-10-13 17:47:09.992240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.203 [2024-10-13 17:47:09.992269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.203 [2024-10-13 17:47:09.992278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:17:20.203 [2024-10-13 17:47:09.992284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.203 [2024-10-13 17:47:09.992481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.203 [2024-10-13 17:47:09.992492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.203 [2024-10-13 17:47:09.992499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:20.203 [2024-10-13 17:47:09.992505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.203 [2024-10-13 17:47:09.995300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.203 [2024-10-13 17:47:09.995315] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.203 [2024-10-13 17:47:09.995323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:17:20.203 [2024-10-13 17:47:09.995332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.203 [2024-10-13 17:47:10.000529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.203 [2024-10-13 17:47:10.000552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.203 [2024-10-13 17:47:10.000578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:17:20.203 [2024-10-13 17:47:10.000584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.019214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.019245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.464 [2024-10-13 17:47:10.019255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:17:20.464 [2024-10-13 17:47:10.019263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.031295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.031324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.464 [2024-10-13 17:47:10.031334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.986 ms 00:17:20.464 [2024-10-13 17:47:10.031347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.031450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.031458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.464 [2024-10-13 17:47:10.031466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:20.464 [2024-10-13 17:47:10.031472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.054276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.054308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.464 [2024-10-13 17:47:10.054317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.783 ms 00:17:20.464 [2024-10-13 17:47:10.054323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.072878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.072906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.464 [2024-10-13 17:47:10.072915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.522 ms 00:17:20.464 [2024-10-13 17:47:10.072921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.089951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.090079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.464 [2024-10-13 17:47:10.090093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.999 ms 00:17:20.464 [2024-10-13 17:47:10.090100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.108244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:20.464 [2024-10-13 17:47:10.108271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.464 [2024-10-13 17:47:10.108280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.092 ms 00:17:20.464 [2024-10-13 17:47:10.108286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.464 [2024-10-13 17:47:10.108315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:20.464-00:17:20.466 [2024-10-13 17:47:10.108330-17:47:10.108977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-96: 0 / 261120 wr_cnt: 0 state: free [identical per-band lines condensed; the captured log ends mid-dump at Band 96]
wr_cnt: 0 state: free 00:17:20.466 [2024-10-13 17:47:10.108983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.466 [2024-10-13 17:47:10.108990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.466 [2024-10-13 17:47:10.108997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.466 [2024-10-13 17:47:10.109003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.466 [2024-10-13 17:47:10.109016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.466 [2024-10-13 17:47:10.109023] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:17:20.466 [2024-10-13 17:47:10.109030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.466 [2024-10-13 17:47:10.109036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.466 [2024-10-13 17:47:10.109042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.466 [2024-10-13 17:47:10.109048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.466 [2024-10-13 17:47:10.109055] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.466 [2024-10-13 17:47:10.109063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.466 [2024-10-13 17:47:10.109070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.466 [2024-10-13 17:47:10.109075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.466 [2024-10-13 17:47:10.109080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.466 [2024-10-13 17:47:10.109086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.466 [2024-10-13 17:47:10.109092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.466 [2024-10-13 17:47:10.109099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:17:20.466 [2024-10-13 17:47:10.109106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.119380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.466 [2024-10-13 17:47:10.119405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.466 [2024-10-13 17:47:10.119413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.261 ms 00:17:20.466 [2024-10-13 17:47:10.119420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.119759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.466 [2024-10-13 17:47:10.119768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.466 [2024-10-13 17:47:10.119780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:20.466 [2024-10-13 17:47:10.119786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.149154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.149184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.466 [2024-10-13 17:47:10.149192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
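The "WAF: inf" in the stats dump above is the write amplification factor, i.e. media writes divided by user writes; this pass recorded 960 total writes against 0 user writes, so the ratio is undefined and is printed as infinity. A minimal shell sketch of that arithmetic, with both counters copied from the dump (the zero-divide guard is an assumption about how the "inf" is arrived at, not code from the test):

    # WAF = total (media) writes / user writes; 960 / 0 is undefined -> "inf"
    awk 'BEGIN { total = 960; user = 0; if (user > 0) print total / user; else print "inf" }'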
00:17:20.466 [2024-10-13 17:47:10.149200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.149269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.149278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.466 [2024-10-13 17:47:10.149286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.149292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.149332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.149340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.466 [2024-10-13 17:47:10.149348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.149354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.149368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.149375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.466 [2024-10-13 17:47:10.149381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.149389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.212937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.212970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.466 [2024-10-13 17:47:10.212980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.212986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.466 [2024-10-13 17:47:10.265128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.466 [2024-10-13 17:47:10.265220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.466 [2024-10-13 17:47:10.265265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.466 [2024-10-13 
17:47:10.265360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.466 [2024-10-13 17:47:10.265411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.466 [2024-10-13 17:47:10.265473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.466 [2024-10-13 17:47:10.265530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.466 [2024-10-13 17:47:10.265537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.466 [2024-10-13 17:47:10.265543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.466 [2024-10-13 17:47:10.265687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.700 ms, result 0 00:17:21.038 00:17:21.038 00:17:21.038 17:47:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:21.299 17:47:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:21.871 17:47:11 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:21.871 [2024-10-13 17:47:11.518903] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
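The three ftl.ftl_trim steps traced above (trim.sh lines 86, 87 and 90) are plain shell commands and can be replayed by hand; a sketch assuming the same checkout path from the trace and that the ftl0 bdev described by ftl.json is still intact:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Trimmed data must read back as zeroes: compare the first 4 MiB of the
    # dumped file against /dev/zero (cmp exits non-zero on the first mismatch).
    cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero
    # Record a checksum of the file so later reads can be verified against it.
    md5sum "$SPDK/test/ftl/data"
    # Push 1024 blocks of the random pattern through the ftl0 bdev.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

All paths and flags are copied verbatim from the traced commands; only the SPDK variable is introduced here for brevity.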
00:17:21.871 [2024-10-13 17:47:11.519067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74656 ] 00:17:21.871 [2024-10-13 17:47:11.673370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.133 [2024-10-13 17:47:11.787281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.395 [2024-10-13 17:47:12.015886] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.395 [2024-10-13 17:47:12.015941] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.395 [2024-10-13 17:47:12.169696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.395 [2024-10-13 17:47:12.169735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.395 [2024-10-13 17:47:12.169747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.395 [2024-10-13 17:47:12.169753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.395 [2024-10-13 17:47:12.171914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.395 [2024-10-13 17:47:12.171944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.395 [2024-10-13 17:47:12.171952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:17:22.395 [2024-10-13 17:47:12.171959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.395 [2024-10-13 17:47:12.172018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.395 [2024-10-13 17:47:12.172546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.395 [2024-10-13 17:47:12.172571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.395 [2024-10-13 17:47:12.172578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.395 [2024-10-13 17:47:12.172585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:17:22.395 [2024-10-13 17:47:12.172591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.395 [2024-10-13 17:47:12.173930] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.395 [2024-10-13 17:47:12.184113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.395 [2024-10-13 17:47:12.184156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.395 [2024-10-13 17:47:12.184166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.184 ms 00:17:22.395 [2024-10-13 17:47:12.184175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.395 [2024-10-13 17:47:12.184246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.395 [2024-10-13 17:47:12.184255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.395 [2024-10-13 17:47:12.184262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:22.395 [2024-10-13 17:47:12.184267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.395 [2024-10-13 17:47:12.190619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:22.395 [2024-10-13 17:47:12.190750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.395 [2024-10-13 17:47:12.190763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.320 ms 00:17:22.395 [2024-10-13 17:47:12.190769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.190847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.190855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.396 [2024-10-13 17:47:12.190862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:22.396 [2024-10-13 17:47:12.190868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.190884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.190891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.396 [2024-10-13 17:47:12.190897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.396 [2024-10-13 17:47:12.190905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.190924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.396 [2024-10-13 17:47:12.193870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.193968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.396 [2024-10-13 17:47:12.193980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:17:22.396 [2024-10-13 17:47:12.193991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.194025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.194031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.396 [2024-10-13 17:47:12.194037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:22.396 [2024-10-13 17:47:12.194043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.194057] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.396 [2024-10-13 17:47:12.194072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.396 [2024-10-13 17:47:12.194102] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.396 [2024-10-13 17:47:12.194115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:22.396 [2024-10-13 17:47:12.194197] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.396 [2024-10-13 17:47:12.194206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.396 [2024-10-13 17:47:12.194215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.396 [2024-10-13 17:47:12.194223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194230] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194236] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.396 [2024-10-13 17:47:12.194245] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.396 [2024-10-13 17:47:12.194250] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.396 [2024-10-13 17:47:12.194256] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.396 [2024-10-13 17:47:12.194263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.194269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.396 [2024-10-13 17:47:12.194275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:22.396 [2024-10-13 17:47:12.194280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.194359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.396 [2024-10-13 17:47:12.194367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.396 [2024-10-13 17:47:12.194373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:22.396 [2024-10-13 17:47:12.194380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.396 [2024-10-13 17:47:12.194456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.396 [2024-10-13 17:47:12.194464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.396 [2024-10-13 17:47:12.194470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.396 [2024-10-13 17:47:12.194488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.396 [2024-10-13 17:47:12.194504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.396 [2024-10-13 17:47:12.194515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.396 [2024-10-13 17:47:12.194520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.396 [2024-10-13 17:47:12.194526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.396 [2024-10-13 17:47:12.194536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.396 [2024-10-13 17:47:12.194541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.396 [2024-10-13 17:47:12.194546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.396 [2024-10-13 17:47:12.194574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194579] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.396 [2024-10-13 17:47:12.194592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.396 [2024-10-13 17:47:12.194608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.396 [2024-10-13 17:47:12.194624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.396 [2024-10-13 17:47:12.194641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.396 [2024-10-13 17:47:12.194657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.396 [2024-10-13 17:47:12.194668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.396 [2024-10-13 17:47:12.194673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.396 [2024-10-13 17:47:12.194678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.396 [2024-10-13 17:47:12.194683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.396 [2024-10-13 17:47:12.194688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.396 [2024-10-13 17:47:12.194693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.396 [2024-10-13 17:47:12.194704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.396 [2024-10-13 17:47:12.194709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.396 [2024-10-13 17:47:12.194721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.396 [2024-10-13 17:47:12.194727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.396 [2024-10-13 17:47:12.194739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.396 [2024-10-13 17:47:12.194744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.396 [2024-10-13 17:47:12.194752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.396 
[2024-10-13 17:47:12.194757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.396 [2024-10-13 17:47:12.194763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.396 [2024-10-13 17:47:12.194768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.396 [2024-10-13 17:47:12.194775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.396 [2024-10-13 17:47:12.194784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.396 [2024-10-13 17:47:12.194791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.396 [2024-10-13 17:47:12.194805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.396 [2024-10-13 17:47:12.194810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.396 [2024-10-13 17:47:12.194817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.396 [2024-10-13 17:47:12.194823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.396 [2024-10-13 17:47:12.194829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.396 [2024-10-13 17:47:12.194834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.396 [2024-10-13 17:47:12.194839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.396 [2024-10-13 17:47:12.194845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.396 [2024-10-13 17:47:12.194850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.396 [2024-10-13 17:47:12.194855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.396 [2024-10-13 17:47:12.194861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.396 [2024-10-13 17:47:12.194866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.397 [2024-10-13 17:47:12.194872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.397 [2024-10-13 17:47:12.194877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.397 [2024-10-13 17:47:12.194884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.397 [2024-10-13 17:47:12.194890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.397 [2024-10-13 17:47:12.194896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.397 [2024-10-13 17:47:12.194902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.397 [2024-10-13 17:47:12.194907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.397 [2024-10-13 17:47:12.194913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.397 [2024-10-13 17:47:12.194920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.397 [2024-10-13 17:47:12.194926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:22.397 [2024-10-13 17:47:12.194934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.219261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.219291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.659 [2024-10-13 17:47:12.219301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.275 ms 00:17:22.659 [2024-10-13 17:47:12.219308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.219407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.219416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.659 [2024-10-13 17:47:12.219422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:22.659 [2024-10-13 17:47:12.219431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.258292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.258325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.659 [2024-10-13 17:47:12.258335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.844 ms 00:17:22.659 [2024-10-13 17:47:12.258342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.258403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.258412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.659 [2024-10-13 17:47:12.258419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.659 [2024-10-13 17:47:12.258426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.258835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.258848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.659 [2024-10-13 17:47:12.258855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:22.659 [2024-10-13 17:47:12.258861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.258980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.258992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.659 [2024-10-13 17:47:12.258999] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:22.659 [2024-10-13 17:47:12.259005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.271273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.271303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.659 [2024-10-13 17:47:12.271311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.251 ms 00:17:22.659 [2024-10-13 17:47:12.271317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.281742] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:22.659 [2024-10-13 17:47:12.281770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.659 [2024-10-13 17:47:12.281780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.281787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.659 [2024-10-13 17:47:12.281794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.361 ms 00:17:22.659 [2024-10-13 17:47:12.281800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.300336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.300369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.659 [2024-10-13 17:47:12.300379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.475 ms 00:17:22.659 [2024-10-13 17:47:12.300385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.309410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.309435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.659 [2024-10-13 17:47:12.309444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.968 ms 00:17:22.659 [2024-10-13 17:47:12.309449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.318240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.318264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.659 [2024-10-13 17:47:12.318272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.750 ms 00:17:22.659 [2024-10-13 17:47:12.318277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.318745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.318860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.659 [2024-10-13 17:47:12.318872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:22.659 [2024-10-13 17:47:12.318879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.368179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.368226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.659 [2024-10-13 17:47:12.368238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.277 ms 00:17:22.659 [2024-10-13 17:47:12.368245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.376471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.659 [2024-10-13 17:47:12.391325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.391362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.659 [2024-10-13 17:47:12.391373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.006 ms 00:17:22.659 [2024-10-13 17:47:12.391379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.391478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.391489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.659 [2024-10-13 17:47:12.391496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:22.659 [2024-10-13 17:47:12.391503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.391549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.391578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.659 [2024-10-13 17:47:12.391586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:22.659 [2024-10-13 17:47:12.391593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.391611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.391620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.659 [2024-10-13 17:47:12.391629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.659 [2024-10-13 17:47:12.391635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.391665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.659 [2024-10-13 17:47:12.391673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.391680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.659 [2024-10-13 17:47:12.391686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.659 [2024-10-13 17:47:12.391692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.410607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.410640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.659 [2024-10-13 17:47:12.410648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.900 ms 00:17:22.659 [2024-10-13 17:47:12.410655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.659 [2024-10-13 17:47:12.410733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.659 [2024-10-13 17:47:12.410742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.659 [2024-10-13 17:47:12.410750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:22.659 [2024-10-13 17:47:12.410756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
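The l2p figures in the startup layout dump above are internally consistent, which makes for a quick sanity check: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and the superblock metadata records a region of matching offset and size (type:0x2, blk_offs:0x20, blk_sz:0x5a00). A short check, assuming a 4 KiB FTL block size (the block size itself is not printed in this dump):

    # 23592960 entries * 4 bytes = 94371840 bytes = 90 MiB ("Region l2p ... blocks: 90.00 MiB")
    echo $(( 23592960 * 4 ))
    # 0x5a00 blocks * 4096 bytes/block = 94371840 bytes, the same region size again
    echo $(( 0x5a00 * 4096 ))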
00:17:22.659 [2024-10-13 17:47:12.411528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.659 [2024-10-13 17:47:12.413906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.562 ms, result 0 00:17:22.659 [2024-10-13 17:47:12.415112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.659 [2024-10-13 17:47:12.425871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.234  [2024-10-13T17:47:13.048Z] Copying: 4096/4096 [kB] (average 11 MBps) [2024-10-13 17:47:12.770636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.234 [2024-10-13 17:47:12.777032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.777148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.234 [2024-10-13 17:47:12.777162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:23.234 [2024-10-13 17:47:12.777168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.777187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.234 [2024-10-13 17:47:12.779364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.779390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.234 [2024-10-13 17:47:12.779397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:17:23.234 [2024-10-13 17:47:12.779404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.781314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.781341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.234 [2024-10-13 17:47:12.781349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:17:23.234 [2024-10-13 17:47:12.781355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.784759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.784783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.234 [2024-10-13 17:47:12.784790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:17:23.234 [2024-10-13 17:47:12.784799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.790120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.790142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.234 [2024-10-13 17:47:12.790150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:17:23.234 [2024-10-13 17:47:12.790157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.807700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.807807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.234 [2024-10-13 17:47:12.807821] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 17.497 ms 00:17:23.234 [2024-10-13 17:47:12.807826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.819621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.819646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.234 [2024-10-13 17:47:12.819655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.769 ms 00:17:23.234 [2024-10-13 17:47:12.819666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.819765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.819772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.234 [2024-10-13 17:47:12.819778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:23.234 [2024-10-13 17:47:12.819783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.837946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.837969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.234 [2024-10-13 17:47:12.837976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.145 ms 00:17:23.234 [2024-10-13 17:47:12.837981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.855284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.855385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.234 [2024-10-13 17:47:12.855397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.267 ms 00:17:23.234 [2024-10-13 17:47:12.855402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.872162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.872188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.234 [2024-10-13 17:47:12.872195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:17:23.234 [2024-10-13 17:47:12.872201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.889761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.234 [2024-10-13 17:47:12.889864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.234 [2024-10-13 17:47:12.889875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.512 ms 00:17:23.234 [2024-10-13 17:47:12.889880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.234 [2024-10-13 17:47:12.889905] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.234 [2024-10-13 17:47:12.889916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.234 [2024-10-13 17:47:12.889946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.889996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.234 [2024-10-13 17:47:12.890177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890363] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.235 [2024-10-13 17:47:12.890502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.235 [2024-10-13 17:47:12.890508] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:17:23.235 [2024-10-13 17:47:12.890514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.235 [2024-10-13 17:47:12.890519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:23.235 [2024-10-13 17:47:12.890524] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.235 [2024-10-13 17:47:12.890530] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.235 [2024-10-13 17:47:12.890535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.235 [2024-10-13 17:47:12.890541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.235 [2024-10-13 17:47:12.890546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.235 [2024-10-13 17:47:12.890551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.235 [2024-10-13 17:47:12.890573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.235 [2024-10-13 17:47:12.890579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.235 [2024-10-13 17:47:12.890585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.235 [2024-10-13 17:47:12.890592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:17:23.235 [2024-10-13 17:47:12.890599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.900262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.235 [2024-10-13 17:47:12.900287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.235 [2024-10-13 17:47:12.900296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.650 ms 00:17:23.235 [2024-10-13 17:47:12.900302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.900619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.235 [2024-10-13 17:47:12.900627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.235 [2024-10-13 17:47:12.900637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:23.235 [2024-10-13 17:47:12.900643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.929729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:12.929758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.235 [2024-10-13 17:47:12.929767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:12.929774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.929829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:12.929836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.235 [2024-10-13 17:47:12.929846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:12.929853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.929885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:12.929892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.235 [2024-10-13 17:47:12.929899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:12.929905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.929918] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:12.929924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.235 [2024-10-13 17:47:12.929931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:12.929939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:12.992253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:12.992383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.235 [2024-10-13 17:47:12.992399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:12.992407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:13.044345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:13.044382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.235 [2024-10-13 17:47:13.044395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.235 [2024-10-13 17:47:13.044402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.235 [2024-10-13 17:47:13.044474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.235 [2024-10-13 17:47:13.044483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.236 [2024-10-13 17:47:13.044490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.236 [2024-10-13 17:47:13.044520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.236 [2024-10-13 17:47:13.044527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.236 [2024-10-13 17:47:13.044534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.236 [2024-10-13 17:47:13.044633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.236 [2024-10-13 17:47:13.044642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.236 [2024-10-13 17:47:13.044649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.236 [2024-10-13 17:47:13.044681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.236 [2024-10-13 17:47:13.044689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.236 [2024-10-13 17:47:13.044695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.236 [2024-10-13 17:47:13.044742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.236 [2024-10-13 17:47:13.044750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.236 [2024-10-13 17:47:13.044756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:23.236 [2024-10-13 17:47:13.044805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.236 [2024-10-13 17:47:13.044812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.236 [2024-10-13 17:47:13.044818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.236 [2024-10-13 17:47:13.044824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.236 [2024-10-13 17:47:13.044952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.903 ms, result 0 00:17:23.808 00:17:23.808 00:17:24.069 17:47:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74681 00:17:24.069 17:47:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:24.069 17:47:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74681 00:17:24.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74681 ']' 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.069 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 [2024-10-13 17:47:13.715978] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:24.069 [2024-10-13 17:47:13.716268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74681 ] 00:17:24.069 [2024-10-13 17:47:13.866771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.330 [2024-10-13 17:47:13.963578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.901 17:47:14 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.901 17:47:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:24.901 17:47:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:25.164 [2024-10-13 17:47:14.753866] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.164 [2024-10-13 17:47:14.753919] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.164 [2024-10-13 17:47:14.930102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.930268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.164 [2024-10-13 17:47:14.930295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.164 [2024-10-13 17:47:14.930309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.933165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.933204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.164 [2024-10-13 17:47:14.933217] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:17:25.164 [2024-10-13 17:47:14.933224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.933328] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.164 [2024-10-13 17:47:14.934173] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.164 [2024-10-13 17:47:14.934317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.934370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.164 [2024-10-13 17:47:14.934396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:25.164 [2024-10-13 17:47:14.934416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.936037] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.164 [2024-10-13 17:47:14.949977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.950113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.164 [2024-10-13 17:47:14.950131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.946 ms 00:17:25.164 [2024-10-13 17:47:14.950141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.950227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.950240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.164 [2024-10-13 17:47:14.950249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:25.164 [2024-10-13 17:47:14.950259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.958245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.958285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.164 [2024-10-13 17:47:14.958296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.935 ms 00:17:25.164 [2024-10-13 17:47:14.958305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.958419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.958432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.164 [2024-10-13 17:47:14.958441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:25.164 [2024-10-13 17:47:14.958452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.958478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.958492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.164 [2024-10-13 17:47:14.958500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:25.164 [2024-10-13 17:47:14.958509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.958533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:25.164 [2024-10-13 17:47:14.962436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.962468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.164 [2024-10-13 17:47:14.962481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:17:25.164 [2024-10-13 17:47:14.962489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.962574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.164 [2024-10-13 17:47:14.962585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.164 [2024-10-13 17:47:14.962596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:25.164 [2024-10-13 17:47:14.962604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.164 [2024-10-13 17:47:14.962628] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.164 [2024-10-13 17:47:14.962651] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:25.164 [2024-10-13 17:47:14.962695] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.164 [2024-10-13 17:47:14.962711] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:25.164 [2024-10-13 17:47:14.962822] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:25.164 [2024-10-13 17:47:14.962834] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.164 [2024-10-13 17:47:14.962848] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:25.164 [2024-10-13 17:47:14.962859] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.164 [2024-10-13 17:47:14.962872] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.164 [2024-10-13 17:47:14.962881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:25.164 [2024-10-13 17:47:14.962890] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.164 [2024-10-13 17:47:14.962898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:25.164 [2024-10-13 17:47:14.962908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:25.164 [2024-10-13 17:47:14.962917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.165 [2024-10-13 17:47:14.962926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.165 [2024-10-13 17:47:14.962934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:25.165 [2024-10-13 17:47:14.962943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.165 [2024-10-13 17:47:14.963030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.165 [2024-10-13 17:47:14.963040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.165 [2024-10-13 17:47:14.963050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:25.165 [2024-10-13 17:47:14.963058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.165 
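As a quick cross-check (an editor's sanity check, not part of the run): the l2p and p2l region sizes dumped next follow directly from the figures above, assuming the FTL's usual 4 KiB block size.
echo $(( 23592960 * 4 / 1024 / 1024 ))   # L2P entries x 4-byte addresses -> 90 (MiB), the l2p region below
echo $(( 2048 * 4096 / 1024 / 1024 ))    # 2048 P2L checkpoint pages x 4 KiB -> 8 (MiB), each p2l0..p2l3 region below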
[2024-10-13 17:47:14.963158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.165 [2024-10-13 17:47:14.963170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.165 [2024-10-13 17:47:14.963179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.165 [2024-10-13 17:47:14.963206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.165 [2024-10-13 17:47:14.963233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.165 [2024-10-13 17:47:14.963248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.165 [2024-10-13 17:47:14.963256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:25.165 [2024-10-13 17:47:14.963263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.165 [2024-10-13 17:47:14.963271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.165 [2024-10-13 17:47:14.963282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:25.165 [2024-10-13 17:47:14.963291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.165 [2024-10-13 17:47:14.963306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.165 [2024-10-13 17:47:14.963335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.165 [2024-10-13 17:47:14.963361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.165 [2024-10-13 17:47:14.963383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.165 [2024-10-13 17:47:14.963408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:25.165 [2024-10-13 17:47:14.963430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.165 [2024-10-13 17:47:14.963445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.165 [2024-10-13 17:47:14.963453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:25.165 [2024-10-13 17:47:14.963459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.165 [2024-10-13 17:47:14.963468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:25.165 [2024-10-13 17:47:14.963474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:25.165 [2024-10-13 17:47:14.963484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:25.165 [2024-10-13 17:47:14.963500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:25.165 [2024-10-13 17:47:14.963507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.165 [2024-10-13 17:47:14.963523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.165 [2024-10-13 17:47:14.963532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.165 [2024-10-13 17:47:14.963553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.165 [2024-10-13 17:47:14.963573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.165 [2024-10-13 17:47:14.963582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.165 [2024-10-13 17:47:14.963589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.165 [2024-10-13 17:47:14.963598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.165 [2024-10-13 17:47:14.963605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.165 [2024-10-13 17:47:14.963615] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.165 [2024-10-13 17:47:14.963625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:25.165 [2024-10-13 17:47:14.963647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:25.165 [2024-10-13 17:47:14.963656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:25.165 [2024-10-13 17:47:14.963664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:25.165 [2024-10-13 17:47:14.963673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:25.165 [2024-10-13 17:47:14.963680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:25.165 [2024-10-13 17:47:14.963689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:25.165 [2024-10-13 17:47:14.963697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:25.165 [2024-10-13 17:47:14.963706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:25.165 [2024-10-13 17:47:14.963713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:25.165 [2024-10-13 17:47:14.963756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.165 [2024-10-13 17:47:14.963765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.165 [2024-10-13 17:47:14.963784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.165 [2024-10-13 17:47:14.963793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.165 [2024-10-13 17:47:14.963801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.165 [2024-10-13 17:47:14.963810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.165 [2024-10-13 17:47:14.963817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.165 [2024-10-13 17:47:14.963827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:17:25.165 [2024-10-13 17:47:14.963836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:14.997282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:14.997474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.427 [2024-10-13 17:47:14.997497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.383 ms 00:17:25.427 [2024-10-13 17:47:14.997506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 
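The nvc blk_offs/blk_sz pairs in the superblock dump above pack back-to-back. A small illustrative check (offsets copied from the dump; not part of the test) that each region ends where the next begins, and that the 0xfffffffe free tail lands exactly at the 5171.00 MiB NV cache capacity in 4 KiB blocks:
end=0
for r in 0x0:0x20 0x20:0x5a00 0x5a20:0x80 0x5aa0:0x80 0x5b20:0x800 0x6320:0x800 \
         0x6b20:0x800 0x7320:0x800 0x7b20:0x40 0x7b60:0x40 0x7ba0:0x20 0x7bc0:0x20 \
         0x7be0:0x20 0x7c00:0x20 0x7c20:0x13b6e0; do
  offs=${r%:*}; sz=${r#*:}
  [ $(( offs )) -eq $(( end )) ] || echo "gap before $offs"
  end=$(( offs + sz ))
done
echo $(( end )) $(( 5171 * 1024 * 1024 / 4096 ))   # both print 1323776: the layout fills the cache exactly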
[2024-10-13 17:47:14.997673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:14.997689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.427 [2024-10-13 17:47:14.997701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:25.427 [2024-10-13 17:47:14.997709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.035762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.035933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.427 [2024-10-13 17:47:15.035958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.024 ms 00:17:25.427 [2024-10-13 17:47:15.035970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.036068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.036080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.427 [2024-10-13 17:47:15.036091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.427 [2024-10-13 17:47:15.036099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.036838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.036861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.427 [2024-10-13 17:47:15.036875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:17:25.427 [2024-10-13 17:47:15.036883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.037057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.037069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.427 [2024-10-13 17:47:15.037081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:25.427 [2024-10-13 17:47:15.037089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.058010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.058052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.427 [2024-10-13 17:47:15.058065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.893 ms 00:17:25.427 [2024-10-13 17:47:15.058074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.073019] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:25.427 [2024-10-13 17:47:15.073063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.427 [2024-10-13 17:47:15.073078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.073088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.427 [2024-10-13 17:47:15.073101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.883 ms 00:17:25.427 [2024-10-13 17:47:15.073108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.099244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.099292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:25.427 [2024-10-13 17:47:15.099308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:17:25.427 [2024-10-13 17:47:15.099317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.112503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.112550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:25.427 [2024-10-13 17:47:15.112584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.087 ms 00:17:25.427 [2024-10-13 17:47:15.112592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.125067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.125109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:25.427 [2024-10-13 17:47:15.125124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.368 ms 00:17:25.427 [2024-10-13 17:47:15.125131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.125840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.125863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.427 [2024-10-13 17:47:15.125877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:25.427 [2024-10-13 17:47:15.125886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.212188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.427 [2024-10-13 17:47:15.212417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:25.427 [2024-10-13 17:47:15.212453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.270 ms 00:17:25.427 [2024-10-13 17:47:15.212464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.427 [2024-10-13 17:47:15.224750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:25.688 [2024-10-13 17:47:15.249633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.249698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.688 [2024-10-13 17:47:15.249715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.963 ms 00:17:25.688 [2024-10-13 17:47:15.249727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.249841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.249857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:25.688 [2024-10-13 17:47:15.249867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:25.688 [2024-10-13 17:47:15.249878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.249951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.249964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.688 [2024-10-13 17:47:15.249973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.050 ms 00:17:25.688 [2024-10-13 17:47:15.249985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.250015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.250031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.688 [2024-10-13 17:47:15.250041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.688 [2024-10-13 17:47:15.250054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.250094] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:25.688 [2024-10-13 17:47:15.250112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.250121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:25.688 [2024-10-13 17:47:15.250133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:25.688 [2024-10-13 17:47:15.250147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.276610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.276660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.688 [2024-10-13 17:47:15.276678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.431 ms 00:17:25.688 [2024-10-13 17:47:15.276688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.276815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.688 [2024-10-13 17:47:15.276827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.688 [2024-10-13 17:47:15.276840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:25.688 [2024-10-13 17:47:15.276849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.688 [2024-10-13 17:47:15.278204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.688 [2024-10-13 17:47:15.281698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.643 ms, result 0 00:17:25.688 [2024-10-13 17:47:15.284110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.688 Some configs were skipped because the RPC state that can call them passed over. 
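For reference, the two bdev_ftl_unmap RPCs traced next trim 1024 blocks at each end of the device; the second --lba is simply the L2P entry count reported at startup minus the range length (23592960 - 1024 = 23591936). A minimal re-creation, assuming the spdk_tgt from this run is still listening on /var/tmp/spdk.sock:
# trim the first and the last 1024 blocks of ftl0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba $(( 23592960 - 1024 )) --num_blocks 1024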
00:17:25.688 17:47:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:25.950 [2024-10-13 17:47:15.536537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.950 [2024-10-13 17:47:15.536765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:25.950 [2024-10-13 17:47:15.536945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:17:25.950 [2024-10-13 17:47:15.536993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.950 [2024-10-13 17:47:15.537065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.325 ms, result 0 00:17:25.950 true 00:17:25.950 17:47:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:25.950 [2024-10-13 17:47:15.756902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.950 [2024-10-13 17:47:15.757088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:25.950 [2024-10-13 17:47:15.757155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:17:25.950 [2024-10-13 17:47:15.757181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.950 [2024-10-13 17:47:15.757245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.249 ms, result 0 00:17:25.950 true 00:17:26.211 17:47:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74681 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74681 ']' 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74681 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74681 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74681' 00:17:26.211 killing process with pid 74681 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74681 00:17:26.211 17:47:15 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74681 00:17:26.784 [2024-10-13 17:47:16.509891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.784 [2024-10-13 17:47:16.509942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:26.784 [2024-10-13 17:47:16.509953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.784 [2024-10-13 17:47:16.509961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.784 [2024-10-13 17:47:16.509980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:26.784 [2024-10-13 17:47:16.512176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.784 [2024-10-13 17:47:16.512200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:26.784 [2024-10-13 17:47:16.512214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.181 ms 00:17:26.785 [2024-10-13 17:47:16.512221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.512456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.512469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:26.785 [2024-10-13 17:47:16.512477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:26.785 [2024-10-13 17:47:16.512483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.515790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.515813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:26.785 [2024-10-13 17:47:16.515823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:17:26.785 [2024-10-13 17:47:16.515829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.521047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.521071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:26.785 [2024-10-13 17:47:16.521083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:17:26.785 [2024-10-13 17:47:16.521090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.528710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.528733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:26.785 [2024-10-13 17:47:16.528744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.559 ms 00:17:26.785 [2024-10-13 17:47:16.528755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.535453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.535477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:26.785 [2024-10-13 17:47:16.535487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.665 ms 00:17:26.785 [2024-10-13 17:47:16.535496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.535608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.535617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:26.785 [2024-10-13 17:47:16.535626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:26.785 [2024-10-13 17:47:16.535632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.543584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.543606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:26.785 [2024-10-13 17:47:16.543615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.935 ms 00:17:26.785 [2024-10-13 17:47:16.543621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.550935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.550956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:26.785 [2024-10-13 
17:47:16.550967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.284 ms 00:17:26.785 [2024-10-13 17:47:16.550972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.557955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.557975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:26.785 [2024-10-13 17:47:16.557984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.952 ms 00:17:26.785 [2024-10-13 17:47:16.557989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.564879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.785 [2024-10-13 17:47:16.564899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:26.785 [2024-10-13 17:47:16.564907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:17:26.785 [2024-10-13 17:47:16.564913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.785 [2024-10-13 17:47:16.564940] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:26.785 [2024-10-13 17:47:16.564951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.564994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565057] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 
17:47:16.565226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:26.785 [2024-10-13 17:47:16.565265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:26.786 [2024-10-13 17:47:16.565388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:26.786 [2024-10-13 17:47:16.565625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:26.786 [2024-10-13 17:47:16.565636] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:17:26.786 [2024-10-13 17:47:16.565647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:26.786 [2024-10-13 17:47:16.565656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:26.786 [2024-10-13 17:47:16.565664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:26.786 [2024-10-13 17:47:16.565672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:26.786 [2024-10-13 17:47:16.565677] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:26.786 [2024-10-13 17:47:16.565685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:26.786 [2024-10-13 17:47:16.565691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:26.786 [2024-10-13 17:47:16.565697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:26.786 [2024-10-13 17:47:16.565702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:26.786 [2024-10-13 17:47:16.565708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.786 [2024-10-13 17:47:16.565714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:26.786 [2024-10-13 17:47:16.565723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:26.786 [2024-10-13 17:47:16.565728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.786 [2024-10-13 17:47:16.575568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.786 [2024-10-13 17:47:16.575588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:26.786 [2024-10-13 17:47:16.575599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.825 ms 00:17:26.786 [2024-10-13 17:47:16.575605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.786 [2024-10-13 17:47:16.575915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:26.786 [2024-10-13 17:47:16.575922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:26.786 [2024-10-13 17:47:16.575931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:26.786 [2024-10-13 17:47:16.575936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.612710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.612732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.048 [2024-10-13 17:47:16.612743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.612749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.612832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.612840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.048 [2024-10-13 17:47:16.612848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.612854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.612889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.612896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.048 [2024-10-13 17:47:16.612906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.612912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.612926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.612932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.048 [2024-10-13 17:47:16.612940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.612945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.675149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.675179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.048 [2024-10-13 17:47:16.675190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.675197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.725687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.725716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.048 [2024-10-13 17:47:16.725727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.725734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.725806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.725816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.048 [2024-10-13 17:47:16.725827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.725833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:27.048 [2024-10-13 17:47:16.725859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.725866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.048 [2024-10-13 17:47:16.725874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.725880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.725958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.725966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.048 [2024-10-13 17:47:16.725976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.725982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.726010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.726018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:27.048 [2024-10-13 17:47:16.726025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.726031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.726069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.726076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.048 [2024-10-13 17:47:16.726087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.726094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.726134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.048 [2024-10-13 17:47:16.726142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.048 [2024-10-13 17:47:16.726150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.048 [2024-10-13 17:47:16.726157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.048 [2024-10-13 17:47:16.726280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.368 ms, result 0 00:17:27.620 17:47:17 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:27.620 [2024-10-13 17:47:17.339084] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:17:27.620 [2024-10-13 17:47:17.339403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74734 ] 00:17:27.913 [2024-10-13 17:47:17.489839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.913 [2024-10-13 17:47:17.604527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.173 [2024-10-13 17:47:17.833507] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.173 [2024-10-13 17:47:17.833569] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.433 [2024-10-13 17:47:17.987589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.433 [2024-10-13 17:47:17.987625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.433 [2024-10-13 17:47:17.987637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.433 [2024-10-13 17:47:17.987644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.433 [2024-10-13 17:47:17.989918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.433 [2024-10-13 17:47:17.989946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.433 [2024-10-13 17:47:17.989955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:17:28.433 [2024-10-13 17:47:17.989961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.433 [2024-10-13 17:47:17.990024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.433 [2024-10-13 17:47:17.990543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.433 [2024-10-13 17:47:17.990571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.433 [2024-10-13 17:47:17.990578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.433 [2024-10-13 17:47:17.990586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:28.434 [2024-10-13 17:47:17.990592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:17.991903] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:28.434 [2024-10-13 17:47:18.002030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.002056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:28.434 [2024-10-13 17:47:18.002066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.127 ms 00:17:28.434 [2024-10-13 17:47:18.002076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.002147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.002156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:28.434 [2024-10-13 17:47:18.002163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:28.434 [2024-10-13 17:47:18.002169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.008353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:28.434 [2024-10-13 17:47:18.008378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.434 [2024-10-13 17:47:18.008386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:17:28.434 [2024-10-13 17:47:18.008392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.008466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.008475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.434 [2024-10-13 17:47:18.008482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:28.434 [2024-10-13 17:47:18.008488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.008505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.008511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.434 [2024-10-13 17:47:18.008517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.434 [2024-10-13 17:47:18.008525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.008543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.434 [2024-10-13 17:47:18.011507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.011528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.434 [2024-10-13 17:47:18.011536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:17:28.434 [2024-10-13 17:47:18.011541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.011581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.011588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.434 [2024-10-13 17:47:18.011595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:28.434 [2024-10-13 17:47:18.011601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.011615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:28.434 [2024-10-13 17:47:18.011633] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:28.434 [2024-10-13 17:47:18.011663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:28.434 [2024-10-13 17:47:18.011676] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:28.434 [2024-10-13 17:47:18.011760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.434 [2024-10-13 17:47:18.011768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.434 [2024-10-13 17:47:18.011777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.434 [2024-10-13 17:47:18.011785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.434 [2024-10-13 17:47:18.011793] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.434 [2024-10-13 17:47:18.011799] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.434 [2024-10-13 17:47:18.011807] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.434 [2024-10-13 17:47:18.011813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.434 [2024-10-13 17:47:18.011818] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.434 [2024-10-13 17:47:18.011825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.011830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.434 [2024-10-13 17:47:18.011836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:17:28.434 [2024-10-13 17:47:18.011843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.011911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.434 [2024-10-13 17:47:18.011917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.434 [2024-10-13 17:47:18.011924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:28.434 [2024-10-13 17:47:18.011931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.434 [2024-10-13 17:47:18.012005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.434 [2024-10-13 17:47:18.012012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.434 [2024-10-13 17:47:18.012019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.434 [2024-10-13 17:47:18.012037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.434 [2024-10-13 17:47:18.012054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.434 [2024-10-13 17:47:18.012065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.434 [2024-10-13 17:47:18.012070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.434 [2024-10-13 17:47:18.012075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.434 [2024-10-13 17:47:18.012086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.434 [2024-10-13 17:47:18.012091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.434 [2024-10-13 17:47:18.012097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.434 [2024-10-13 17:47:18.012109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012114] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.434 [2024-10-13 17:47:18.012139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.434 [2024-10-13 17:47:18.012156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.434 [2024-10-13 17:47:18.012172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.434 [2024-10-13 17:47:18.012189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.434 [2024-10-13 17:47:18.012204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.434 [2024-10-13 17:47:18.012216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.434 [2024-10-13 17:47:18.012221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.434 [2024-10-13 17:47:18.012226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.434 [2024-10-13 17:47:18.012232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.434 [2024-10-13 17:47:18.012237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.434 [2024-10-13 17:47:18.012242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.434 [2024-10-13 17:47:18.012252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.434 [2024-10-13 17:47:18.012259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012265] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.434 [2024-10-13 17:47:18.012271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.434 [2024-10-13 17:47:18.012277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.434 [2024-10-13 17:47:18.012291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.434 [2024-10-13 17:47:18.012297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.434 [2024-10-13 17:47:18.012303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.434 
[2024-10-13 17:47:18.012309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.434 [2024-10-13 17:47:18.012314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.434 [2024-10-13 17:47:18.012319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.434 [2024-10-13 17:47:18.012326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.434 [2024-10-13 17:47:18.012334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.435 [2024-10-13 17:47:18.012347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.435 [2024-10-13 17:47:18.012353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:28.435 [2024-10-13 17:47:18.012359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.435 [2024-10-13 17:47:18.012364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.435 [2024-10-13 17:47:18.012370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.435 [2024-10-13 17:47:18.012376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.435 [2024-10-13 17:47:18.012381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.435 [2024-10-13 17:47:18.012387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.435 [2024-10-13 17:47:18.012392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.435 [2024-10-13 17:47:18.012420] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.435 [2024-10-13 17:47:18.012427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.435 [2024-10-13 17:47:18.012440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.435 [2024-10-13 17:47:18.012445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.435 [2024-10-13 17:47:18.012451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.435 [2024-10-13 17:47:18.012457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.012462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.435 [2024-10-13 17:47:18.012468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:17:28.435 [2024-10-13 17:47:18.012476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.036684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.036708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.435 [2024-10-13 17:47:18.036717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.151 ms 00:17:28.435 [2024-10-13 17:47:18.036724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.036818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.036826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.435 [2024-10-13 17:47:18.036833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:28.435 [2024-10-13 17:47:18.036842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.084503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.084531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.435 [2024-10-13 17:47:18.084541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.644 ms 00:17:28.435 [2024-10-13 17:47:18.084548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.084638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.084647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.435 [2024-10-13 17:47:18.084655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.435 [2024-10-13 17:47:18.084662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.085055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.085074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.435 [2024-10-13 17:47:18.085082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:28.435 [2024-10-13 17:47:18.085089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.085212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.085222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.435 [2024-10-13 17:47:18.085230] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:28.435 [2024-10-13 17:47:18.085236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.097464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.097488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.435 [2024-10-13 17:47:18.097496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.212 ms 00:17:28.435 [2024-10-13 17:47:18.097503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.107646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:28.435 [2024-10-13 17:47:18.107672] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:28.435 [2024-10-13 17:47:18.107682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.107689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:28.435 [2024-10-13 17:47:18.107696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.087 ms 00:17:28.435 [2024-10-13 17:47:18.107703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.126322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.126352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.435 [2024-10-13 17:47:18.126361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.562 ms 00:17:28.435 [2024-10-13 17:47:18.126367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.135060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.135084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.435 [2024-10-13 17:47:18.135091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.640 ms 00:17:28.435 [2024-10-13 17:47:18.135096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.143811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.143833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.435 [2024-10-13 17:47:18.143841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.673 ms 00:17:28.435 [2024-10-13 17:47:18.143847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.144328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.144347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.435 [2024-10-13 17:47:18.144354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:28.435 [2024-10-13 17:47:18.144361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.190759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.190796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.435 [2024-10-13 17:47:18.190807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 46.380 ms 00:17:28.435 [2024-10-13 17:47:18.190814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.198936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.435 [2024-10-13 17:47:18.213230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.213260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.435 [2024-10-13 17:47:18.213270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.340 ms 00:17:28.435 [2024-10-13 17:47:18.213278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.213354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.213363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.435 [2024-10-13 17:47:18.213370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:28.435 [2024-10-13 17:47:18.213377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.213422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.213434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.435 [2024-10-13 17:47:18.213441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:28.435 [2024-10-13 17:47:18.213447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.213471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.213480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.435 [2024-10-13 17:47:18.213486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.435 [2024-10-13 17:47:18.213493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.213520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.435 [2024-10-13 17:47:18.213528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.213535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.435 [2024-10-13 17:47:18.213541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.435 [2024-10-13 17:47:18.213547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.231825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.231852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.435 [2024-10-13 17:47:18.231860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.243 ms 00:17:28.435 [2024-10-13 17:47:18.231866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.435 [2024-10-13 17:47:18.231942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.435 [2024-10-13 17:47:18.231951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.435 [2024-10-13 17:47:18.231958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:28.435 [2024-10-13 17:47:18.231964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:28.435 [2024-10-13 17:47:18.232766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.436 [2024-10-13 17:47:18.234986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.911 ms, result 0 00:17:28.436 [2024-10-13 17:47:18.235686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.695 [2024-10-13 17:47:18.250480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.636  [2024-10-13T17:47:20.391Z] Copying: 23/256 [MB] (23 MBps) [2024-10-13T17:47:21.331Z] Copying: 44/256 [MB] (20 MBps) [2024-10-13T17:47:22.715Z] Copying: 60/256 [MB] (16 MBps) [2024-10-13T17:47:23.656Z] Copying: 72/256 [MB] (11 MBps) [2024-10-13T17:47:24.599Z] Copying: 90/256 [MB] (17 MBps) [2024-10-13T17:47:25.541Z] Copying: 101/256 [MB] (11 MBps) [2024-10-13T17:47:26.481Z] Copying: 118/256 [MB] (17 MBps) [2024-10-13T17:47:27.422Z] Copying: 137/256 [MB] (19 MBps) [2024-10-13T17:47:28.362Z] Copying: 156/256 [MB] (18 MBps) [2024-10-13T17:47:29.305Z] Copying: 176/256 [MB] (20 MBps) [2024-10-13T17:47:30.688Z] Copying: 195/256 [MB] (18 MBps) [2024-10-13T17:47:31.631Z] Copying: 207/256 [MB] (11 MBps) [2024-10-13T17:47:32.574Z] Copying: 225/256 [MB] (17 MBps) [2024-10-13T17:47:33.145Z] Copying: 238/256 [MB] (13 MBps) [2024-10-13T17:47:33.406Z] Copying: 256/256 [MB] (average 17 MBps)[2024-10-13 17:47:33.403078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.853 [2024-10-13 17:47:33.417369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.853 [2024-10-13 17:47:33.417430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.853 [2024-10-13 17:47:33.417450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:43.853 [2024-10-13 17:47:33.417461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.853 [2024-10-13 17:47:33.417505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.853 [2024-10-13 17:47:33.420768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.853 [2024-10-13 17:47:33.420816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.853 [2024-10-13 17:47:33.420830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:17:43.853 [2024-10-13 17:47:33.420841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.421165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.421179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.854 [2024-10-13 17:47:33.421190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:43.854 [2024-10-13 17:47:33.421199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.424957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.424984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.854 [2024-10-13 17:47:33.425002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:17:43.854 [2024-10-13 17:47:33.425011] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.431967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.432011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:43.854 [2024-10-13 17:47:33.432025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:17:43.854 [2024-10-13 17:47:33.432034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.459279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.459336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.854 [2024-10-13 17:47:33.459350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.140 ms 00:17:43.854 [2024-10-13 17:47:33.459359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.477577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.477631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.854 [2024-10-13 17:47:33.477653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.139 ms 00:17:43.854 [2024-10-13 17:47:33.477665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.477844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.477857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.854 [2024-10-13 17:47:33.477868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:43.854 [2024-10-13 17:47:33.477877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.505369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.505418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:43.854 [2024-10-13 17:47:33.505431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.459 ms 00:17:43.854 [2024-10-13 17:47:33.505440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.532011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.532063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:43.854 [2024-10-13 17:47:33.532077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.478 ms 00:17:43.854 [2024-10-13 17:47:33.532085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.558271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.558321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.854 [2024-10-13 17:47:33.558334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:17:43.854 [2024-10-13 17:47:33.558342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.584233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.854 [2024-10-13 17:47:33.584283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.854 [2024-10-13 17:47:33.584295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.786 ms 00:17:43.854 [2024-10-13 17:47:33.584303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.854 [2024-10-13 17:47:33.584372] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.854 [2024-10-13 17:47:33.584398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 
[2024-10-13 17:47:33.584608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:43.854 [2024-10-13 17:47:33.584822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.854 [2024-10-13 17:47:33.584919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.584996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.855 [2024-10-13 17:47:33.585283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.855 [2024-10-13 17:47:33.585292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74767a50-ba29-46ab-9b2a-5c33f6d9a290 00:17:43.855 [2024-10-13 17:47:33.585301] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.855 [2024-10-13 17:47:33.585310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:43.855 [2024-10-13 17:47:33.585318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.855 [2024-10-13 17:47:33.585328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.855 [2024-10-13 17:47:33.585337] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.855 [2024-10-13 17:47:33.585345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.855 [2024-10-13 17:47:33.585352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.855 [2024-10-13 17:47:33.585359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.855 [2024-10-13 17:47:33.585365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.855 [2024-10-13 17:47:33.585373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.855 [2024-10-13 17:47:33.585381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.855 [2024-10-13 17:47:33.585394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:17:43.855 [2024-10-13 17:47:33.585403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.600533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.855 [2024-10-13 17:47:33.600598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.855 [2024-10-13 17:47:33.600611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.110 ms 00:17:43.855 [2024-10-13 17:47:33.600621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.601071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.855 [2024-10-13 17:47:33.601098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.855 [2024-10-13 17:47:33.601109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:43.855 [2024-10-13 17:47:33.601118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.643737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.855 [2024-10-13 17:47:33.643805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.855 [2024-10-13 17:47:33.643818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.855 [2024-10-13 17:47:33.643829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.643953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.855 [2024-10-13 
17:47:33.643968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.855 [2024-10-13 17:47:33.643978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.855 [2024-10-13 17:47:33.643986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.644050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.855 [2024-10-13 17:47:33.644061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.855 [2024-10-13 17:47:33.644070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.855 [2024-10-13 17:47:33.644080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.855 [2024-10-13 17:47:33.644100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.855 [2024-10-13 17:47:33.644126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.855 [2024-10-13 17:47:33.644138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.855 [2024-10-13 17:47:33.644146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.737719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.737782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.116 [2024-10-13 17:47:33.737797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.737807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.116 [2024-10-13 17:47:33.814182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.116 [2024-10-13 17:47:33.814311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.116 [2024-10-13 17:47:33.814381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.116 [2024-10-13 17:47:33.814535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814614] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.116 [2024-10-13 17:47:33.814636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.116 [2024-10-13 17:47:33.814731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.814799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.116 [2024-10-13 17:47:33.814810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.116 [2024-10-13 17:47:33.814820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.116 [2024-10-13 17:47:33.814834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.116 [2024-10-13 17:47:33.815026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.657 ms, result 0 00:17:45.084 00:17:45.084 00:17:45.084 17:47:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.654 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:45.654 17:47:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74681 00:17:45.654 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74681 ']' 00:17:45.654 Process with pid 74681 is not found 00:17:45.654 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74681 00:17:45.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74681) - No such process 00:17:45.654 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74681 is not found' 00:17:45.654 00:17:45.654 real 1m14.784s 00:17:45.654 user 1m41.427s 00:17:45.654 sys 0m5.943s 00:17:45.654 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.654 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 ************************************ 00:17:45.654 END TEST ftl_trim 00:17:45.654 ************************************ 00:17:45.654 17:47:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:45.654 17:47:35 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:45.654 17:47:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.654 
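[note] The ftl_trim teardown above is the integrity gate for the whole test: the data read back from the FTL bdev is checked against a checksum recorded earlier, and only then are the scratch files removed and the target process reaped. A minimal sketch of that verify pattern, using the same file paths as this run (the surrounding write/readback steps are assumed):

    # record a checksum of the data file before the device is torn down
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # after readback, verify the contents still match -- the "data: OK" line above
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5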
17:47:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 ************************************ 00:17:45.654 START TEST ftl_restore 00:17:45.654 ************************************ 00:17:45.654 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:45.914 * Looking for test storage... 00:17:45.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.914 17:47:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.914 --rc genhtml_branch_coverage=1 00:17:45.914 --rc genhtml_function_coverage=1 00:17:45.914 --rc genhtml_legend=1 00:17:45.914 --rc geninfo_all_blocks=1 00:17:45.914 --rc geninfo_unexecuted_blocks=1 00:17:45.914 00:17:45.914 ' 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.914 --rc genhtml_branch_coverage=1 00:17:45.914 --rc genhtml_function_coverage=1 00:17:45.914 --rc genhtml_legend=1 00:17:45.914 --rc geninfo_all_blocks=1 00:17:45.914 --rc geninfo_unexecuted_blocks=1 00:17:45.914 00:17:45.914 ' 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.914 --rc genhtml_branch_coverage=1 00:17:45.914 --rc genhtml_function_coverage=1 00:17:45.914 --rc genhtml_legend=1 00:17:45.914 --rc geninfo_all_blocks=1 00:17:45.914 --rc geninfo_unexecuted_blocks=1 00:17:45.914 00:17:45.914 ' 00:17:45.914 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.914 --rc genhtml_branch_coverage=1 00:17:45.914 --rc genhtml_function_coverage=1 00:17:45.914 --rc genhtml_legend=1 00:17:45.914 --rc geninfo_all_blocks=1 00:17:45.915 --rc geninfo_unexecuted_blocks=1 00:17:45.915 00:17:45.915 ' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.EIheL9yw0y 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:45.915 
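[note] At this point restore.sh has parsed its arguments: -c 0000:00:10.0 selects the NV-cache device and the positional 0000:00:11.0 is the base device. The trace that follows drives a fixed RPC sequence to assemble the FTL bdev; condensed from the commands visible in this run (the two UUIDs are per-run values, shown here as placeholders):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: attach the NVMe controller and carve a thin lvol out of it
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
    # cache device: attach the second controller and split off a 5171 MiB slice
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # bind base + cache into an FTL bdev with a 10 MiB DRAM limit for the L2P
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0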
17:47:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74997 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.915 17:47:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74997 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74997 ']' 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.915 17:47:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:45.915 [2024-10-13 17:47:35.697080] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:45.915 [2024-10-13 17:47:35.697317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74997 ] 00:17:46.175 [2024-10-13 17:47:35.856890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.434 [2024-10-13 17:47:36.006439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.004 17:47:36 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.004 17:47:36 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:47.004 17:47:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:47.575 17:47:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:47.575 17:47:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:47.575 17:47:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:47.575 { 00:17:47.575 "name": "nvme0n1", 00:17:47.575 "aliases": [ 00:17:47.575 "17ee773a-5aa8-49d2-8fae-491f1fd56403" 00:17:47.575 ], 00:17:47.575 "product_name": "NVMe disk", 00:17:47.575 "block_size": 4096, 00:17:47.575 "num_blocks": 1310720, 00:17:47.575 "uuid": 
"17ee773a-5aa8-49d2-8fae-491f1fd56403", 00:17:47.575 "numa_id": -1, 00:17:47.575 "assigned_rate_limits": { 00:17:47.575 "rw_ios_per_sec": 0, 00:17:47.575 "rw_mbytes_per_sec": 0, 00:17:47.575 "r_mbytes_per_sec": 0, 00:17:47.575 "w_mbytes_per_sec": 0 00:17:47.575 }, 00:17:47.575 "claimed": true, 00:17:47.575 "claim_type": "read_many_write_one", 00:17:47.575 "zoned": false, 00:17:47.575 "supported_io_types": { 00:17:47.575 "read": true, 00:17:47.575 "write": true, 00:17:47.575 "unmap": true, 00:17:47.575 "flush": true, 00:17:47.575 "reset": true, 00:17:47.575 "nvme_admin": true, 00:17:47.575 "nvme_io": true, 00:17:47.575 "nvme_io_md": false, 00:17:47.575 "write_zeroes": true, 00:17:47.575 "zcopy": false, 00:17:47.575 "get_zone_info": false, 00:17:47.575 "zone_management": false, 00:17:47.575 "zone_append": false, 00:17:47.575 "compare": true, 00:17:47.575 "compare_and_write": false, 00:17:47.575 "abort": true, 00:17:47.575 "seek_hole": false, 00:17:47.575 "seek_data": false, 00:17:47.575 "copy": true, 00:17:47.575 "nvme_iov_md": false 00:17:47.575 }, 00:17:47.575 "driver_specific": { 00:17:47.575 "nvme": [ 00:17:47.575 { 00:17:47.575 "pci_address": "0000:00:11.0", 00:17:47.575 "trid": { 00:17:47.575 "trtype": "PCIe", 00:17:47.575 "traddr": "0000:00:11.0" 00:17:47.575 }, 00:17:47.575 "ctrlr_data": { 00:17:47.575 "cntlid": 0, 00:17:47.575 "vendor_id": "0x1b36", 00:17:47.575 "model_number": "QEMU NVMe Ctrl", 00:17:47.575 "serial_number": "12341", 00:17:47.575 "firmware_revision": "8.0.0", 00:17:47.575 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.575 "oacs": { 00:17:47.575 "security": 0, 00:17:47.575 "format": 1, 00:17:47.575 "firmware": 0, 00:17:47.575 "ns_manage": 1 00:17:47.575 }, 00:17:47.575 "multi_ctrlr": false, 00:17:47.575 "ana_reporting": false 00:17:47.575 }, 00:17:47.575 "vs": { 00:17:47.575 "nvme_version": "1.4" 00:17:47.575 }, 00:17:47.575 "ns_data": { 00:17:47.575 "id": 1, 00:17:47.575 "can_share": false 00:17:47.575 } 00:17:47.575 } 00:17:47.575 ], 00:17:47.575 "mp_policy": "active_passive" 00:17:47.575 } 00:17:47.575 } 00:17:47.575 ]' 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:47.575 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:47.836 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:47.836 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:47.836 17:47:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=82345f5c-b25b-4818-a815-1fc06212f051 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:47.836 17:47:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82345f5c-b25b-4818-a815-1fc06212f051 00:17:48.096 17:47:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:48.355 17:47:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=bcd51b23-6873-4ccf-afa8-01e6f88fbf9e 00:17:48.355 17:47:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bcd51b23-6873-4ccf-afa8-01e6f88fbf9e 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:48.615 17:47:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.615 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.615 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:48.615 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:48.615 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:48.615 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:48.874 { 00:17:48.874 "name": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:48.874 "aliases": [ 00:17:48.874 "lvs/nvme0n1p0" 00:17:48.874 ], 00:17:48.874 "product_name": "Logical Volume", 00:17:48.874 "block_size": 4096, 00:17:48.874 "num_blocks": 26476544, 00:17:48.874 "uuid": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:48.874 "assigned_rate_limits": { 00:17:48.874 "rw_ios_per_sec": 0, 00:17:48.874 "rw_mbytes_per_sec": 0, 00:17:48.874 "r_mbytes_per_sec": 0, 00:17:48.874 "w_mbytes_per_sec": 0 00:17:48.874 }, 00:17:48.874 "claimed": false, 00:17:48.874 "zoned": false, 00:17:48.874 "supported_io_types": { 00:17:48.874 "read": true, 00:17:48.874 "write": true, 00:17:48.874 "unmap": true, 00:17:48.874 "flush": false, 00:17:48.874 "reset": true, 00:17:48.874 "nvme_admin": false, 00:17:48.874 "nvme_io": false, 00:17:48.874 "nvme_io_md": false, 00:17:48.874 "write_zeroes": true, 00:17:48.874 "zcopy": false, 00:17:48.874 "get_zone_info": false, 00:17:48.874 "zone_management": false, 00:17:48.874 "zone_append": false, 00:17:48.874 "compare": false, 00:17:48.874 "compare_and_write": false, 00:17:48.874 "abort": false, 00:17:48.874 "seek_hole": true, 00:17:48.874 "seek_data": true, 00:17:48.874 "copy": false, 00:17:48.874 "nvme_iov_md": false 00:17:48.874 }, 00:17:48.874 "driver_specific": { 00:17:48.874 "lvol": { 00:17:48.874 "lvol_store_uuid": "bcd51b23-6873-4ccf-afa8-01e6f88fbf9e", 00:17:48.874 "base_bdev": "nvme0n1", 00:17:48.874 "thin_provision": true, 00:17:48.874 "num_allocated_clusters": 0, 00:17:48.874 "snapshot": false, 00:17:48.874 "clone": false, 00:17:48.874 "esnap_clone": false 00:17:48.874 } 00:17:48.874 } 00:17:48.874 } 00:17:48.874 ]' 00:17:48.874 17:47:38 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:48.874 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:48.874 17:47:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:48.874 17:47:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:48.874 17:47:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:49.134 17:47:38 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:49.134 17:47:38 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:49.134 17:47:38 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.134 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.134 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:49.134 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:49.134 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:49.134 17:47:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.393 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:49.393 { 00:17:49.393 "name": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:49.393 "aliases": [ 00:17:49.393 "lvs/nvme0n1p0" 00:17:49.393 ], 00:17:49.393 "product_name": "Logical Volume", 00:17:49.393 "block_size": 4096, 00:17:49.393 "num_blocks": 26476544, 00:17:49.393 "uuid": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:49.393 "assigned_rate_limits": { 00:17:49.393 "rw_ios_per_sec": 0, 00:17:49.393 "rw_mbytes_per_sec": 0, 00:17:49.393 "r_mbytes_per_sec": 0, 00:17:49.393 "w_mbytes_per_sec": 0 00:17:49.393 }, 00:17:49.393 "claimed": false, 00:17:49.393 "zoned": false, 00:17:49.393 "supported_io_types": { 00:17:49.393 "read": true, 00:17:49.393 "write": true, 00:17:49.393 "unmap": true, 00:17:49.393 "flush": false, 00:17:49.393 "reset": true, 00:17:49.393 "nvme_admin": false, 00:17:49.393 "nvme_io": false, 00:17:49.393 "nvme_io_md": false, 00:17:49.393 "write_zeroes": true, 00:17:49.393 "zcopy": false, 00:17:49.393 "get_zone_info": false, 00:17:49.393 "zone_management": false, 00:17:49.393 "zone_append": false, 00:17:49.393 "compare": false, 00:17:49.393 "compare_and_write": false, 00:17:49.393 "abort": false, 00:17:49.393 "seek_hole": true, 00:17:49.393 "seek_data": true, 00:17:49.393 "copy": false, 00:17:49.393 "nvme_iov_md": false 00:17:49.393 }, 00:17:49.393 "driver_specific": { 00:17:49.393 "lvol": { 00:17:49.393 "lvol_store_uuid": "bcd51b23-6873-4ccf-afa8-01e6f88fbf9e", 00:17:49.393 "base_bdev": "nvme0n1", 00:17:49.393 "thin_provision": true, 00:17:49.393 "num_allocated_clusters": 0, 00:17:49.393 "snapshot": false, 00:17:49.393 "clone": false, 00:17:49.393 "esnap_clone": false 00:17:49.393 } 00:17:49.393 } 00:17:49.393 } 00:17:49.393 ]' 00:17:49.393 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
00:17:49.393 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:49.393 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:49.393 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:49.394 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:49.394 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:49.394 17:47:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:49.394 17:47:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:49.653 17:47:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:49.653 17:47:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.653 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.653 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:49.653 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:49.653 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:49.653 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:49.913 { 00:17:49.913 "name": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:49.913 "aliases": [ 00:17:49.913 "lvs/nvme0n1p0" 00:17:49.913 ], 00:17:49.913 "product_name": "Logical Volume", 00:17:49.913 "block_size": 4096, 00:17:49.913 "num_blocks": 26476544, 00:17:49.913 "uuid": "f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd", 00:17:49.913 "assigned_rate_limits": { 00:17:49.913 "rw_ios_per_sec": 0, 00:17:49.913 "rw_mbytes_per_sec": 0, 00:17:49.913 "r_mbytes_per_sec": 0, 00:17:49.913 "w_mbytes_per_sec": 0 00:17:49.913 }, 00:17:49.913 "claimed": false, 00:17:49.913 "zoned": false, 00:17:49.913 "supported_io_types": { 00:17:49.913 "read": true, 00:17:49.913 "write": true, 00:17:49.913 "unmap": true, 00:17:49.913 "flush": false, 00:17:49.913 "reset": true, 00:17:49.913 "nvme_admin": false, 00:17:49.913 "nvme_io": false, 00:17:49.913 "nvme_io_md": false, 00:17:49.913 "write_zeroes": true, 00:17:49.913 "zcopy": false, 00:17:49.913 "get_zone_info": false, 00:17:49.913 "zone_management": false, 00:17:49.913 "zone_append": false, 00:17:49.913 "compare": false, 00:17:49.913 "compare_and_write": false, 00:17:49.913 "abort": false, 00:17:49.913 "seek_hole": true, 00:17:49.913 "seek_data": true, 00:17:49.913 "copy": false, 00:17:49.913 "nvme_iov_md": false 00:17:49.913 }, 00:17:49.913 "driver_specific": { 00:17:49.913 "lvol": { 00:17:49.913 "lvol_store_uuid": "bcd51b23-6873-4ccf-afa8-01e6f88fbf9e", 00:17:49.913 "base_bdev": "nvme0n1", 00:17:49.913 "thin_provision": true, 00:17:49.913 "num_allocated_clusters": 0, 00:17:49.913 "snapshot": false, 00:17:49.913 "clone": false, 00:17:49.913 "esnap_clone": false 00:17:49.913 } 00:17:49.913 } 00:17:49.913 } 00:17:49.913 ]' 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:49.913 17:47:39 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:49.913 17:47:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd --l2p_dram_limit 10' 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:49.913 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:49.913 17:47:39 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f0a2f145-cc54-4e29-ad6b-5c35e3ac5dbd --l2p_dram_limit 10 -c nvc0n1p0 00:17:50.174 [2024-10-13 17:47:39.786336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.786382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.174 [2024-10-13 17:47:39.786397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.174 [2024-10-13 17:47:39.786404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.786462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.786472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.174 [2024-10-13 17:47:39.786481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:50.174 [2024-10-13 17:47:39.786487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.786508] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.174 [2024-10-13 17:47:39.787119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.174 [2024-10-13 17:47:39.787142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.787148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.174 [2024-10-13 17:47:39.787157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:17:50.174 [2024-10-13 17:47:39.787163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.787222] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:17:50.174 [2024-10-13 17:47:39.788577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.788612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:50.174 [2024-10-13 17:47:39.788621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:50.174 [2024-10-13 17:47:39.788632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.795590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 
17:47:39.795620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.174 [2024-10-13 17:47:39.795629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:17:50.174 [2024-10-13 17:47:39.795637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.795712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.795722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.174 [2024-10-13 17:47:39.795729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:50.174 [2024-10-13 17:47:39.795739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.795780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.795790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.174 [2024-10-13 17:47:39.795796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:50.174 [2024-10-13 17:47:39.795804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.174 [2024-10-13 17:47:39.795823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.174 [2024-10-13 17:47:39.799133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.174 [2024-10-13 17:47:39.799162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.175 [2024-10-13 17:47:39.799171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:17:50.175 [2024-10-13 17:47:39.799181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.175 [2024-10-13 17:47:39.799211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.175 [2024-10-13 17:47:39.799217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.175 [2024-10-13 17:47:39.799225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.175 [2024-10-13 17:47:39.799231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.175 [2024-10-13 17:47:39.799252] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:50.175 [2024-10-13 17:47:39.799366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.175 [2024-10-13 17:47:39.799379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.175 [2024-10-13 17:47:39.799388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.175 [2024-10-13 17:47:39.799398] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799405] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799413] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:50.175 [2024-10-13 17:47:39.799419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.175 [2024-10-13 17:47:39.799426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.175 [2024-10-13 17:47:39.799432] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.175 [2024-10-13 17:47:39.799440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.175 [2024-10-13 17:47:39.799447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.175 [2024-10-13 17:47:39.799454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:17:50.175 [2024-10-13 17:47:39.799465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.175 [2024-10-13 17:47:39.799532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.175 [2024-10-13 17:47:39.799539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.175 [2024-10-13 17:47:39.799546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:50.175 [2024-10-13 17:47:39.799553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.175 [2024-10-13 17:47:39.799641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.175 [2024-10-13 17:47:39.799649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.175 [2024-10-13 17:47:39.799658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.175 [2024-10-13 17:47:39.799677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.175 [2024-10-13 17:47:39.799695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.175 [2024-10-13 17:47:39.799707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.175 [2024-10-13 17:47:39.799712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:50.175 [2024-10-13 17:47:39.799718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.175 [2024-10-13 17:47:39.799723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.175 [2024-10-13 17:47:39.799730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:50.175 [2024-10-13 17:47:39.799735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.175 [2024-10-13 17:47:39.799750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.175 [2024-10-13 17:47:39.799769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.175 
[2024-10-13 17:47:39.799787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.175 [2024-10-13 17:47:39.799805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.175 [2024-10-13 17:47:39.799822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.175 [2024-10-13 17:47:39.799842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.175 [2024-10-13 17:47:39.799854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.175 [2024-10-13 17:47:39.799859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:50.175 [2024-10-13 17:47:39.799865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.175 [2024-10-13 17:47:39.799869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.175 [2024-10-13 17:47:39.799876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:50.175 [2024-10-13 17:47:39.799881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.175 [2024-10-13 17:47:39.799893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:50.175 [2024-10-13 17:47:39.799899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799904] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.175 [2024-10-13 17:47:39.799911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.175 [2024-10-13 17:47:39.799917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.175 [2024-10-13 17:47:39.799930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.175 [2024-10-13 17:47:39.799939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.175 [2024-10-13 17:47:39.799944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.175 [2024-10-13 17:47:39.799950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.175 [2024-10-13 17:47:39.799955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.175 [2024-10-13 17:47:39.799961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.175 [2024-10-13 17:47:39.799973] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.175 [2024-10-13 
17:47:39.799982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.799988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:50.175 [2024-10-13 17:47:39.799996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:50.175 [2024-10-13 17:47:39.800002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:50.175 [2024-10-13 17:47:39.800009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:50.175 [2024-10-13 17:47:39.800014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:50.175 [2024-10-13 17:47:39.800022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:50.175 [2024-10-13 17:47:39.800027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:50.175 [2024-10-13 17:47:39.800034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:50.175 [2024-10-13 17:47:39.800040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:50.175 [2024-10-13 17:47:39.800049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:50.175 [2024-10-13 17:47:39.800081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.175 [2024-10-13 17:47:39.800089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.175 [2024-10-13 17:47:39.800115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.175 [2024-10-13 17:47:39.800120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.175 [2024-10-13 17:47:39.800127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.175 [2024-10-13 17:47:39.800133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.175 [2024-10-13 17:47:39.800140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.175 [2024-10-13 17:47:39.800146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:50.175 [2024-10-13 17:47:39.800153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.175 [2024-10-13 17:47:39.800195] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:50.176 [2024-10-13 17:47:39.800207] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:54.384 [2024-10-13 17:47:43.855372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.855479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:54.384 [2024-10-13 17:47:43.855500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4055.161 ms 00:17:54.384 [2024-10-13 17:47:43.855512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.893806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.893889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.384 [2024-10-13 17:47:43.893906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.018 ms 00:17:54.384 [2024-10-13 17:47:43.893919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.894081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.894097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.384 [2024-10-13 17:47:43.894107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:54.384 [2024-10-13 17:47:43.894122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.934400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.934467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.384 [2024-10-13 17:47:43.934482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.225 ms 00:17:54.384 [2024-10-13 17:47:43.934494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.934534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.934548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.384 [2024-10-13 17:47:43.934571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.384 [2024-10-13 17:47:43.934587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.935323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.935379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.384 [2024-10-13 17:47:43.935391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:17:54.384 [2024-10-13 17:47:43.935404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 
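Each FTL management step in this transcript is traced by mngt/ftl_mngt.c as a fixed four-line group: Action, name, duration, and status. A minimal sketch for tabulating those steps offline, assuming this console output has been saved to a local file called build.log (hypothetical name) and that the 428/430 source-line tags match this SPDK revision:

    # Pair every "name: <step>" line with the "duration: <ms>" line that follows it.
    awk '/428:trace_step/ { sub(/.*name: /, ""); step = $0 }
         /430:trace_step/ { printf "%-40s %s ms\n", step, $(NF-1) }' build.log

Run against the lines above, this would report, for example, "Scrub NV cache" at 4055.161 ms, by far the slowest step of this startup.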
[2024-10-13 17:47:43.935532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.935546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.384 [2024-10-13 17:47:43.935574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:54.384 [2024-10-13 17:47:43.935590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.956733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.956791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.384 [2024-10-13 17:47:43.956805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.116 ms 00:17:54.384 [2024-10-13 17:47:43.956820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:43.971849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:54.384 [2024-10-13 17:47:43.976999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:43.977052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.384 [2024-10-13 17:47:43.977067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.075 ms 00:17:54.384 [2024-10-13 17:47:43.977076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:44.104714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:44.104785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:54.384 [2024-10-13 17:47:44.104807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.592 ms 00:17:54.384 [2024-10-13 17:47:44.104817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:44.105052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:44.105066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.384 [2024-10-13 17:47:44.105083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:54.384 [2024-10-13 17:47:44.105095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:44.132537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:44.132605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:54.384 [2024-10-13 17:47:44.132624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.361 ms 00:17:54.384 [2024-10-13 17:47:44.132633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:44.158780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:44.158873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:54.384 [2024-10-13 17:47:44.158890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.084 ms 00:17:54.384 [2024-10-13 17:47:44.158899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.384 [2024-10-13 17:47:44.159549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.384 [2024-10-13 17:47:44.159610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.384 
[2024-10-13 17:47:44.159626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms
00:17:54.384 [2024-10-13 17:47:44.159635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.255976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.256032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:17:54.645 [2024-10-13 17:47:44.256053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.272 ms
00:17:54.645 [2024-10-13 17:47:44.256063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.285586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.285641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:54.645 [2024-10-13 17:47:44.285662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.401 ms
00:17:54.645 [2024-10-13 17:47:44.285671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.312681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.312732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:17:54.645 [2024-10-13 17:47:44.312747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.947 ms
00:17:54.645 [2024-10-13 17:47:44.312755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.339914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.339971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:54.645 [2024-10-13 17:47:44.339987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.098 ms
00:17:54.645 [2024-10-13 17:47:44.339995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.340061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.340072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:54.645 [2024-10-13 17:47:44.340087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:17:54.645 [2024-10-13 17:47:44.340108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.340225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.645 [2024-10-13 17:47:44.340238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:54.645 [2024-10-13 17:47:44.340251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:17:54.645 [2024-10-13 17:47:44.340260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.645 [2024-10-13 17:47:44.341716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4554.730 ms, result 0
00:17:54.645 {
00:17:54.645 "name": "ftl0",
00:17:54.645 "uuid": "90a54ddc-e2a7-4828-bef8-d66a300cc8d9"
00:17:54.645 }
00:17:54.645 17:47:44 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:17:54.645 17:47:44 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:17:54.907 17:47:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
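For reference, restore.sh@61-63 above assemble the saved bdev subsystem configuration into a self-contained JSON document. A minimal standalone sketch of the same pattern (paths are the ones visible in this log; the output file is the --json config that spdk_dd loads further down in this test):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Wrap the RPC output as {"subsystems": [ <bdev config> ]} so another
    # SPDK process can replay it and recreate the same bdev stack, ftl0 included.
    {
      echo '{"subsystems": ['
      "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev
      echo ']}'
    } > "$SPDK/test/ftl/config/ftl.json"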
00:17:54.907 17:47:44 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:17:55.169 [2024-10-13 17:47:44.792889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.169 [2024-10-13 17:47:44.792969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:55.169 [2024-10-13 17:47:44.792988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:17:55.169 [2024-10-13 17:47:44.793011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.169 [2024-10-13 17:47:44.793041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:55.169 [2024-10-13 17:47:44.796601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.169 [2024-10-13 17:47:44.796652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:55.169 [2024-10-13 17:47:44.796674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.533 ms
00:17:55.169 [2024-10-13 17:47:44.796684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.169 [2024-10-13 17:47:44.797001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.169 [2024-10-13 17:47:44.797013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:55.169 [2024-10-13 17:47:44.797027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms
00:17:55.169 [2024-10-13 17:47:44.797035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.169 [2024-10-13 17:47:44.800316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.169 [2024-10-13 17:47:44.800510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:55.169 [2024-10-13 17:47:44.800537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms
00:17:55.169 [2024-10-13 17:47:44.800547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.169 [2024-10-13 17:47:44.806764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.169 [2024-10-13 17:47:44.806945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:55.170 [2024-10-13 17:47:44.806973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms
00:17:55.170 [2024-10-13 17:47:44.806983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.170 [2024-10-13 17:47:44.835531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.170 [2024-10-13 17:47:44.835598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:55.170 [2024-10-13 17:47:44.835616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.450 ms
00:17:55.170 [2024-10-13 17:47:44.835625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.170 [2024-10-13 17:47:44.854879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.170 [2024-10-13 17:47:44.854936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:55.170 [2024-10-13 17:47:44.854957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.208 ms
00:17:55.170 [2024-10-13 17:47:44.854965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.170 [2024-10-13 17:47:44.855160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.170 [2024-10-13 17:47:44.855173]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:55.170 [2024-10-13 17:47:44.855186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:55.170 [2024-10-13 17:47:44.855194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.170 [2024-10-13 17:47:44.882374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.170 [2024-10-13 17:47:44.882591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:55.170 [2024-10-13 17:47:44.882621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.152 ms 00:17:55.170 [2024-10-13 17:47:44.882629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.170 [2024-10-13 17:47:44.909314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.170 [2024-10-13 17:47:44.909366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:55.170 [2024-10-13 17:47:44.909382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:17:55.170 [2024-10-13 17:47:44.909389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.170 [2024-10-13 17:47:44.934751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.170 [2024-10-13 17:47:44.934803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.170 [2024-10-13 17:47:44.934817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.325 ms 00:17:55.170 [2024-10-13 17:47:44.934825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.170 [2024-10-13 17:47:44.960413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.170 [2024-10-13 17:47:44.960465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.170 [2024-10-13 17:47:44.960480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.504 ms 00:17:55.170 [2024-10-13 17:47:44.960486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.170 [2024-10-13 17:47:44.960517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.170 [2024-10-13 17:47:44.960535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960656] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 
[2024-10-13 17:47:44.960925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.960993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:55.170 [2024-10-13 17:47:44.961165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:55.170 [2024-10-13 17:47:44.961261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:55.171 [2024-10-13 17:47:44.961552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:55.171 [2024-10-13 17:47:44.961575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:17:55.171 [2024-10-13 17:47:44.961585] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.171 [2024-10-13 17:47:44.961598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.171 [2024-10-13 17:47:44.961609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.171 [2024-10-13 17:47:44.961620] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.171 [2024-10-13 17:47:44.961627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.171 [2024-10-13 17:47:44.961649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:55.171 [2024-10-13 17:47:44.961657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.171 [2024-10-13 17:47:44.961666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.171 [2024-10-13 17:47:44.961673] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:55.171 [2024-10-13 17:47:44.961683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.171 [2024-10-13 17:47:44.961691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.171 [2024-10-13 17:47:44.961702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:17:55.171 [2024-10-13 17:47:44.961710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.171 [2024-10-13 17:47:44.976477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.171 [2024-10-13 17:47:44.976700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.171 [2024-10-13 17:47:44.976725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.705 ms 00:17:55.171 [2024-10-13 17:47:44.976733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.171 [2024-10-13 17:47:44.977171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.171 [2024-10-13 17:47:44.977182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.171 [2024-10-13 17:47:44.977194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:55.171 [2024-10-13 17:47:44.977202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.028426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.028650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.433 [2024-10-13 17:47:45.028679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.028688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.028773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.028784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.433 [2024-10-13 17:47:45.028796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.028805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.028927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.028939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.433 [2024-10-13 17:47:45.028950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.028958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.028983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.028992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.433 [2024-10-13 17:47:45.029002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.029010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.122675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.122745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.433 [2024-10-13 17:47:45.122764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:55.433 [2024-10-13 17:47:45.122773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.433 [2024-10-13 17:47:45.198363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.198373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.433 [2024-10-13 17:47:45.198520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.198529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.433 [2024-10-13 17:47:45.198675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.198684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.433 [2024-10-13 17:47:45.198830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.198840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:55.433 [2024-10-13 17:47:45.198911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.198919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.198981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.198991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.433 [2024-10-13 17:47:45.199003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.199015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.199085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.433 [2024-10-13 17:47:45.199097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.433 [2024-10-13 17:47:45.199109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.433 [2024-10-13 17:47:45.199117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.433 [2024-10-13 17:47:45.199305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.359 ms, result 0 00:17:55.433 true 00:17:55.433 17:47:45 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74997 
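killprocess here is a helper from common/autotest_common.sh; the @950-@974 xtrace that follows shows each command it runs. A simplified standalone sketch of that pattern (the in-tree helper also special-cases sudo-wrapped processes and non-Linux hosts, both elided here):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1        # no pid supplied
      kill -0 "$pid" || return 1       # bail out if the process is already gone
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace below
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                      # reap the child and propagate its exit code
    }

Note that wait only succeeds because the SPDK app was started as a child of the same test shell; the few seconds between the kill and the next command in the log are the app exiting.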
00:17:55.433 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74997 ']'
00:17:55.433 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74997
00:17:55.433 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname
00:17:55.433 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:55.433 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74997
00:17:55.694 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:55.694 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:55.694 17:47:45 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74997'
killing process with pid 74997
17:47:45 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74997
17:47:45 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74997
00:18:02.288 17:47:51 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:18:05.590 262144+0 records in
00:18:05.590 262144+0 records out
00:18:05.590 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.84304 s, 279 MB/s
00:18:05.590 17:47:54 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:07.022 17:47:56 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:07.022 [2024-10-13 17:47:56.692618] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:18:07.022 [2024-10-13 17:47:56.692719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75233 ]
00:18:07.283 [2024-10-13 17:47:56.840712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.283 [2024-10-13 17:47:56.962585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.544 [2024-10-13 17:47:57.295817] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:07.544 [2024-10-13 17:47:57.295909] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:07.807 [2024-10-13 17:47:57.461358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.807 [2024-10-13 17:47:57.461430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:07.807 [2024-10-13 17:47:57.461448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:07.807 [2024-10-13 17:47:57.461464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.807 [2024-10-13 17:47:57.461526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.807 [2024-10-13 17:47:57.461538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:07.807 [2024-10-13 17:47:57.461548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:18:07.807 [2024-10-13 17:47:57.461593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.807 [2024-10-13 17:47:57.461639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0]
Using nvc0n1p0 as write buffer cache 00:18:07.807 [2024-10-13 17:47:57.462429] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:07.807 [2024-10-13 17:47:57.462469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.462484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.807 [2024-10-13 17:47:57.462494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:18:07.807 [2024-10-13 17:47:57.462504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.464836] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:07.807 [2024-10-13 17:47:57.480368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.480423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:07.807 [2024-10-13 17:47:57.480438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.534 ms 00:18:07.807 [2024-10-13 17:47:57.480447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.480537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.480547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:07.807 [2024-10-13 17:47:57.480589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:07.807 [2024-10-13 17:47:57.480603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.492390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.492441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.807 [2024-10-13 17:47:57.492455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.687 ms 00:18:07.807 [2024-10-13 17:47:57.492463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.492584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.492599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.807 [2024-10-13 17:47:57.492615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:07.807 [2024-10-13 17:47:57.492628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.492715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.492730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:07.807 [2024-10-13 17:47:57.492740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:07.807 [2024-10-13 17:47:57.492749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.492775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.807 [2024-10-13 17:47:57.497500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.497548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.807 [2024-10-13 17:47:57.497575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.730 ms 00:18:07.807 [2024-10-13 17:47:57.497589] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.497659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-10-13 17:47:57.497673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:07.807 [2024-10-13 17:47:57.497688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:07.807 [2024-10-13 17:47:57.497703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-10-13 17:47:57.497745] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:07.807 [2024-10-13 17:47:57.497773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:07.807 [2024-10-13 17:47:57.497815] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:07.807 [2024-10-13 17:47:57.497835] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:07.807 [2024-10-13 17:47:57.497947] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:07.808 [2024-10-13 17:47:57.497959] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:07.808 [2024-10-13 17:47:57.497971] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:07.808 [2024-10-13 17:47:57.497981] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:07.808 [2024-10-13 17:47:57.497991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:07.808 [2024-10-13 17:47:57.497999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:07.808 [2024-10-13 17:47:57.498009] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:07.808 [2024-10-13 17:47:57.498017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:07.808 [2024-10-13 17:47:57.498026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:07.808 [2024-10-13 17:47:57.498034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.498046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:07.808 [2024-10-13 17:47:57.498056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:07.808 [2024-10-13 17:47:57.498064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-10-13 17:47:57.498153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.498163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:07.808 [2024-10-13 17:47:57.498171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:07.808 [2024-10-13 17:47:57.498180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-10-13 17:47:57.498286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:07.808 [2024-10-13 17:47:57.498297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:07.808 [2024-10-13 17:47:57.498309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:07.808 [2024-10-13 17:47:57.498317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:07.808 [2024-10-13 17:47:57.498332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:07.808 [2024-10-13 17:47:57.498354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.808 [2024-10-13 17:47:57.498369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:07.808 [2024-10-13 17:47:57.498376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:07.808 [2024-10-13 17:47:57.498382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.808 [2024-10-13 17:47:57.498389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:07.808 [2024-10-13 17:47:57.498397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:07.808 [2024-10-13 17:47:57.498412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:07.808 [2024-10-13 17:47:57.498430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:07.808 [2024-10-13 17:47:57.498451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:07.808 [2024-10-13 17:47:57.498473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:07.808 [2024-10-13 17:47:57.498493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:07.808 [2024-10-13 17:47:57.498512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:07.808 [2024-10-13 17:47:57.498534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.808 [2024-10-13 17:47:57.498547] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:07.808 [2024-10-13 17:47:57.498553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:07.808 [2024-10-13 17:47:57.498578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.808 [2024-10-13 17:47:57.498585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:07.808 [2024-10-13 17:47:57.498593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:07.808 [2024-10-13 17:47:57.498600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:07.808 [2024-10-13 17:47:57.498615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:07.808 [2024-10-13 17:47:57.498621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498629] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:07.808 [2024-10-13 17:47:57.498637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:07.808 [2024-10-13 17:47:57.498645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.808 [2024-10-13 17:47:57.498661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:07.808 [2024-10-13 17:47:57.498671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:07.808 [2024-10-13 17:47:57.498679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:07.808 [2024-10-13 17:47:57.498686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:07.808 [2024-10-13 17:47:57.498693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:07.808 [2024-10-13 17:47:57.498701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:07.808 [2024-10-13 17:47:57.498710] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:07.808 [2024-10-13 17:47:57.498721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:07.808 [2024-10-13 17:47:57.498739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:07.808 [2024-10-13 17:47:57.498747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:07.808 [2024-10-13 17:47:57.498754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:07.808 [2024-10-13 17:47:57.498761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:07.808 [2024-10-13 17:47:57.498768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:07.808 [2024-10-13 17:47:57.498775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:07.808 [2024-10-13 17:47:57.498782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:07.808 [2024-10-13 17:47:57.498789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:07.808 [2024-10-13 17:47:57.498796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:07.808 [2024-10-13 17:47:57.498832] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:07.808 [2024-10-13 17:47:57.498841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.808 [2024-10-13 17:47:57.498862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.808 [2024-10-13 17:47:57.498871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.808 [2024-10-13 17:47:57.498878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.808 [2024-10-13 17:47:57.498886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.498895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.808 [2024-10-13 17:47:57.498904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:18:07.808 [2024-10-13 17:47:57.498912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-10-13 17:47:57.537806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.537865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.808 [2024-10-13 17:47:57.537879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.847 ms 00:18:07.808 [2024-10-13 17:47:57.537889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-10-13 17:47:57.537982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.537997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:07.808 [2024-10-13 17:47:57.538007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.066 ms 00:18:07.808 [2024-10-13 17:47:57.538016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-10-13 17:47:57.584919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-10-13 17:47:57.584979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.809 [2024-10-13 17:47:57.584994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.837 ms 00:18:07.809 [2024-10-13 17:47:57.585004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.809 [2024-10-13 17:47:57.585057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.809 [2024-10-13 17:47:57.585068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.809 [2024-10-13 17:47:57.585079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:07.809 [2024-10-13 17:47:57.585088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.809 [2024-10-13 17:47:57.585892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.809 [2024-10-13 17:47:57.586104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.809 [2024-10-13 17:47:57.586125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:18:07.809 [2024-10-13 17:47:57.586134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.809 [2024-10-13 17:47:57.586325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.809 [2024-10-13 17:47:57.586337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.809 [2024-10-13 17:47:57.586346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:18:07.809 [2024-10-13 17:47:57.586354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.809 [2024-10-13 17:47:57.604743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.809 [2024-10-13 17:47:57.604953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.809 [2024-10-13 17:47:57.604974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.363 ms 00:18:07.809 [2024-10-13 17:47:57.604991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.621495] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:08.071 [2024-10-13 17:47:57.621576] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:08.071 [2024-10-13 17:47:57.621596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.621610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:08.071 [2024-10-13 17:47:57.621625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.472 ms 00:18:08.071 [2024-10-13 17:47:57.621638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.649253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.649332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:08.071 [2024-10-13 17:47:57.649347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.550 ms 00:18:08.071 [2024-10-13 17:47:57.649365] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.662994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.663056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:08.071 [2024-10-13 17:47:57.663069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.557 ms 00:18:08.071 [2024-10-13 17:47:57.663078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.676563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.676614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:08.071 [2024-10-13 17:47:57.676627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.416 ms 00:18:08.071 [2024-10-13 17:47:57.676635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.677328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.677354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.071 [2024-10-13 17:47:57.677366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:18:08.071 [2024-10-13 17:47:57.677375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.752315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.752382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:08.071 [2024-10-13 17:47:57.752400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.920 ms 00:18:08.071 [2024-10-13 17:47:57.752410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.764064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:08.071 [2024-10-13 17:47:57.768292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.768340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:08.071 [2024-10-13 17:47:57.768355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.805 ms 00:18:08.071 [2024-10-13 17:47:57.768365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.768471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.768485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:08.071 [2024-10-13 17:47:57.768496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:08.071 [2024-10-13 17:47:57.768505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.768633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.768658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.071 [2024-10-13 17:47:57.768673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:08.071 [2024-10-13 17:47:57.768687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.768721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.768736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:08.071 [2024-10-13 17:47:57.768749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.071 [2024-10-13 17:47:57.768761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.768814] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:08.071 [2024-10-13 17:47:57.768829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.768843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:08.071 [2024-10-13 17:47:57.768856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:08.071 [2024-10-13 17:47:57.768865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.796867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.796925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:08.071 [2024-10-13 17:47:57.796940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.979 ms 00:18:08.071 [2024-10-13 17:47:57.796949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.797053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.071 [2024-10-13 17:47:57.797068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:08.071 [2024-10-13 17:47:57.797079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:08.071 [2024-10-13 17:47:57.797088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.071 [2024-10-13 17:47:57.799534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.587 ms, result 0 00:18:09.015  [2024-10-13T17:48:00.217Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-13T17:48:01.163Z] Copying: 29/1024 [MB] (13 MBps) [2024-10-13T17:48:02.107Z] Copying: 45/1024 [MB] (16 MBps) [2024-10-13T17:48:03.048Z] Copying: 59/1024 [MB] (13 MBps) [2024-10-13T17:48:03.990Z] Copying: 85/1024 [MB] (26 MBps) [2024-10-13T17:48:04.932Z] Copying: 97/1024 [MB] (11 MBps) [2024-10-13T17:48:05.876Z] Copying: 113/1024 [MB] (15 MBps) [2024-10-13T17:48:06.819Z] Copying: 141/1024 [MB] (28 MBps) [2024-10-13T17:48:08.206Z] Copying: 152/1024 [MB] (11 MBps) [2024-10-13T17:48:09.150Z] Copying: 164/1024 [MB] (12 MBps) [2024-10-13T17:48:10.094Z] Copying: 179/1024 [MB] (14 MBps) [2024-10-13T17:48:11.037Z] Copying: 190/1024 [MB] (10 MBps) [2024-10-13T17:48:11.980Z] Copying: 204/1024 [MB] (14 MBps) [2024-10-13T17:48:12.926Z] Copying: 227/1024 [MB] (22 MBps) [2024-10-13T17:48:13.919Z] Copying: 240/1024 [MB] (12 MBps) [2024-10-13T17:48:14.864Z] Copying: 250/1024 [MB] (10 MBps) [2024-10-13T17:48:16.253Z] Copying: 265/1024 [MB] (14 MBps) [2024-10-13T17:48:16.825Z] Copying: 278/1024 [MB] (12 MBps) [2024-10-13T17:48:18.213Z] Copying: 295/1024 [MB] (16 MBps) [2024-10-13T17:48:19.157Z] Copying: 308/1024 [MB] (13 MBps) [2024-10-13T17:48:20.101Z] Copying: 325/1024 [MB] (16 MBps) [2024-10-13T17:48:21.045Z] Copying: 343/1024 [MB] (18 MBps) [2024-10-13T17:48:21.992Z] Copying: 356/1024 [MB] (12 MBps) [2024-10-13T17:48:22.936Z] Copying: 370/1024 [MB] (14 MBps) [2024-10-13T17:48:23.879Z] Copying: 382/1024 [MB] (12 MBps) [2024-10-13T17:48:24.823Z] Copying: 399/1024 [MB] (16 MBps) [2024-10-13T17:48:25.870Z] Copying: 418/1024 [MB] (18 MBps) 
[2024-10-13T17:48:26.814Z] Copying: 440/1024 [MB] (21 MBps) [2024-10-13T17:48:28.203Z] Copying: 456/1024 [MB] (16 MBps) [2024-10-13T17:48:29.148Z] Copying: 471/1024 [MB] (14 MBps) [2024-10-13T17:48:30.093Z] Copying: 486/1024 [MB] (14 MBps) [2024-10-13T17:48:31.037Z] Copying: 497/1024 [MB] (11 MBps) [2024-10-13T17:48:31.981Z] Copying: 511/1024 [MB] (14 MBps) [2024-10-13T17:48:32.927Z] Copying: 530/1024 [MB] (18 MBps) [2024-10-13T17:48:33.872Z] Copying: 544/1024 [MB] (14 MBps) [2024-10-13T17:48:34.816Z] Copying: 559/1024 [MB] (14 MBps) [2024-10-13T17:48:36.205Z] Copying: 572/1024 [MB] (12 MBps) [2024-10-13T17:48:37.150Z] Copying: 587/1024 [MB] (15 MBps) [2024-10-13T17:48:38.095Z] Copying: 600/1024 [MB] (13 MBps) [2024-10-13T17:48:39.039Z] Copying: 616/1024 [MB] (15 MBps) [2024-10-13T17:48:39.984Z] Copying: 642/1024 [MB] (26 MBps) [2024-10-13T17:48:40.929Z] Copying: 656/1024 [MB] (14 MBps) [2024-10-13T17:48:41.873Z] Copying: 670/1024 [MB] (14 MBps) [2024-10-13T17:48:42.817Z] Copying: 686/1024 [MB] (15 MBps) [2024-10-13T17:48:44.205Z] Copying: 705/1024 [MB] (19 MBps) [2024-10-13T17:48:45.149Z] Copying: 723/1024 [MB] (17 MBps) [2024-10-13T17:48:46.093Z] Copying: 738/1024 [MB] (15 MBps) [2024-10-13T17:48:47.038Z] Copying: 752/1024 [MB] (14 MBps) [2024-10-13T17:48:47.982Z] Copying: 766/1024 [MB] (13 MBps) [2024-10-13T17:48:48.926Z] Copying: 792/1024 [MB] (25 MBps) [2024-10-13T17:48:49.872Z] Copying: 803/1024 [MB] (11 MBps) [2024-10-13T17:48:50.818Z] Copying: 813/1024 [MB] (10 MBps) [2024-10-13T17:48:52.208Z] Copying: 828/1024 [MB] (15 MBps) [2024-10-13T17:48:53.154Z] Copying: 842/1024 [MB] (13 MBps) [2024-10-13T17:48:54.178Z] Copying: 858/1024 [MB] (16 MBps) [2024-10-13T17:48:55.122Z] Copying: 874/1024 [MB] (16 MBps) [2024-10-13T17:48:56.066Z] Copying: 886/1024 [MB] (11 MBps) [2024-10-13T17:48:57.010Z] Copying: 908/1024 [MB] (22 MBps) [2024-10-13T17:48:57.953Z] Copying: 931/1024 [MB] (22 MBps) [2024-10-13T17:48:58.894Z] Copying: 948/1024 [MB] (17 MBps) [2024-10-13T17:48:59.839Z] Copying: 970/1024 [MB] (21 MBps) [2024-10-13T17:49:01.226Z] Copying: 983/1024 [MB] (12 MBps) [2024-10-13T17:49:02.170Z] Copying: 997/1024 [MB] (14 MBps) [2024-10-13T17:49:02.742Z] Copying: 1013/1024 [MB] (15 MBps) [2024-10-13T17:49:02.742Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-13 17:49:02.612114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.612155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.928 [2024-10-13 17:49:02.612168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.928 [2024-10-13 17:49:02.612175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.612191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.928 [2024-10-13 17:49:02.614462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.614489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.928 [2024-10-13 17:49:02.614498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:19:12.928 [2024-10-13 17:49:02.614505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.616109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.616218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 
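At this point the spdk_dd transfer driven by restore.sh has finished: the progress stream above runs from "Copying: 15/1024 [MB]" at 17:48:00Z to "Copying: 1024/1024 [MB] (average 15 MBps)" at 17:49:02Z. Measured from the 'FTL startup' finish at 17:47:57.799 (assuming the bracketed timestamps and the Z-suffixed ones share a clock, which the surrounding records suggest), that is 1024 MiB in about 65 seconds, roughly 15.8 MiB/s, which truncates to the reported average of 15 MBps; the per-interval rates swing between roughly 10 and 28 MBps. The sketch below is not part of the test: it is a minimal, ad-hoc Python parser for progress records of this shape, handy for eyeballing throughput dips in a log blob. The record layout is taken from the log itself; the regex and function names are my own.

import re

# Matches interval records such as:
#   [2024-10-13T17:48:26.814Z] Copying: 440/1024 [MB] (21 MBps)
# The final "(average 15 MBps)" summary record deliberately does not match,
# since "average" is not a digit sequence.
PROGRESS = re.compile(
    r"\[(?P<ts>[^\]]+)\] Copying: (?P<done>\d+)/(?P<total>\d+) \[MB\] "
    r"\((?P<rate>\d+) MBps\)"
)

def interval_rates(log_text):
    """Return (timestamp, cumulative MB done, interval MBps) tuples."""
    return [(m["ts"], int(m["done"]), int(m["rate"]))
            for m in PROGRESS.finditer(log_text)]

Feeding the copy stream above through interval_rates() would, for example, surface the 10 MBps troughs around the 190/1024 and 250/1024 marks.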
00:19:12.928 [2024-10-13 17:49:02.616231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:19:12.928 [2024-10-13 17:49:02.616238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.630037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.630067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.928 [2024-10-13 17:49:02.630076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.786 ms 00:19:12.928 [2024-10-13 17:49:02.630081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.634703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.634798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.928 [2024-10-13 17:49:02.634814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:19:12.928 [2024-10-13 17:49:02.634820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.653311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.653337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.928 [2024-10-13 17:49:02.653345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.453 ms 00:19:12.928 [2024-10-13 17:49:02.653351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.665229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.665332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.928 [2024-10-13 17:49:02.665346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.852 ms 00:19:12.928 [2024-10-13 17:49:02.665352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.665438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.665446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.928 [2024-10-13 17:49:02.665454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:12.928 [2024-10-13 17:49:02.665464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.683324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.683349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.928 [2024-10-13 17:49:02.683357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.850 ms 00:19:12.928 [2024-10-13 17:49:02.683362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.700679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.700703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.928 [2024-10-13 17:49:02.700719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.293 ms 00:19:12.928 [2024-10-13 17:49:02.700725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.717890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.717914] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.928 [2024-10-13 17:49:02.717922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.140 ms 00:19:12.928 [2024-10-13 17:49:02.717927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.928 [2024-10-13 17:49:02.735021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.928 [2024-10-13 17:49:02.735118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.928 [2024-10-13 17:49:02.735130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.052 ms 00:19:12.929 [2024-10-13 17:49:02.735135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.929 [2024-10-13 17:49:02.735158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.929 [2024-10-13 17:49:02.735169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:19:12.929 [2024-10-13 17:49:02.735277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.929 [2024-10-13 17:49:02.735712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.930 [2024-10-13 17:49:02.735759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.930 [2024-10-13 17:49:02.735766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:19:12.930 [2024-10-13 17:49:02.735776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.930 [2024-10-13 17:49:02.735782] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.930 [2024-10-13 17:49:02.735799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.930 [2024-10-13 17:49:02.735805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.930 [2024-10-13 17:49:02.735811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.930 [2024-10-13 17:49:02.735817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.930 [2024-10-13 17:49:02.735823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.930 [2024-10-13 17:49:02.735833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.930 [2024-10-13 17:49:02.735837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.930 [2024-10-13 17:49:02.735843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.930 [2024-10-13 17:49:02.735848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.930 [2024-10-13 17:49:02.735854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:19:12.930 [2024-10-13 17:49:02.735860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.745738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.191 [2024-10-13 17:49:02.745866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.191 [2024-10-13 17:49:02.745877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.866 ms 00:19:13.191 [2024-10-13 17:49:02.745884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.746169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.191 [2024-10-13 17:49:02.746176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.191 [2024-10-13 17:49:02.746182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:19:13.191 [2024-10-13 17:49:02.746188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:13.191 [2024-10-13 17:49:02.773310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.773406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.191 [2024-10-13 17:49:02.773417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.773423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.773464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.773470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.191 [2024-10-13 17:49:02.773477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.773482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.773522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.773533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.191 [2024-10-13 17:49:02.773540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.773546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.773575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.773582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.191 [2024-10-13 17:49:02.773587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.773593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.835418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.835456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.191 [2024-10-13 17:49:02.835466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.835473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.885975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.191 [2024-10-13 17:49:02.886024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.191 [2024-10-13 17:49:02.886121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.191 [2024-10-13 17:49:02.886175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 
17:49:02.886181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.191 [2024-10-13 17:49:02.886274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:13.191 [2024-10-13 17:49:02.886319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.191 [2024-10-13 17:49:02.886375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.191 [2024-10-13 17:49:02.886432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.191 [2024-10-13 17:49:02.886438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.191 [2024-10-13 17:49:02.886445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.191 [2024-10-13 17:49:02.886572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.410 ms, result 0 00:19:14.134 00:19:14.134 00:19:14.134 17:49:03 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:14.134 [2024-10-13 17:49:03.765646] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
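With the first mount shut down cleanly (note the shutdown statistics further up: "total writes: 960, user writes: 0, WAF: inf" — write amplification factor is total media writes divided by user writes, so a mount that served no user writes during this window reports infinity), restore.sh invokes spdk_dd to read the data back out of the ftl0 bdev into a plain file for comparison: --ib names the input bdev, --of the output file, --json the bdev configuration, and --count the number of I/Os. One hedged consistency check worth noting: the progress bar above counted 1024 [MB] for --count=262144, which implies a 4 KiB logical block. That inference is mine, not stated anywhere in the log; the arithmetic below just confirms it is self-consistent.

# Sanity arithmetic for the spdk_dd readback parameters quoted above
# (assumes the "[MB]" figure is MiB and one I/O per logical block):
COUNT = 262144          # --count from the command line in the log
BLOCK = 4096            # inferred 4 KiB logical block size (my assumption)
assert COUNT * BLOCK == 1024 * 1024 * 1024   # == the 1024 MiB the copy reported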
00:19:14.134 [2024-10-13 17:49:03.765772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75926 ] 00:19:14.134 [2024-10-13 17:49:03.915095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.395 [2024-10-13 17:49:04.010628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.657 [2024-10-13 17:49:04.238354] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.657 [2024-10-13 17:49:04.238513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.657 [2024-10-13 17:49:04.391234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.391389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.657 [2024-10-13 17:49:04.391459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.657 [2024-10-13 17:49:04.391492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.391576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.391606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.657 [2024-10-13 17:49:04.391626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:14.657 [2024-10-13 17:49:04.391692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.391732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.657 [2024-10-13 17:49:04.392471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.657 [2024-10-13 17:49:04.392600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.392656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.657 [2024-10-13 17:49:04.392680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:19:14.657 [2024-10-13 17:49:04.392729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.394649] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:14.657 [2024-10-13 17:49:04.408135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.408265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:14.657 [2024-10-13 17:49:04.408325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.489 ms 00:19:14.657 [2024-10-13 17:49:04.408349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.408454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.408586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:14.657 [2024-10-13 17:49:04.408612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:14.657 [2024-10-13 17:49:04.408675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.415990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
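The 'FTL startup' sequence for this second mount now repeats in the records that follow: each management step emits a fixed Action / name / duration / status quadruple via trace_step. A throwaway sketch for turning that stream into a per-step timing table is below; it assumes the concatenated single-stream form shown here, where the next record's elapsed stamp ("00:MM:SS.mmm") terminates each name, and the regexes and function names are ad hoc, not an SPDK API.

import re

# "name:" records, e.g.
#   ... trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.657 ...
NAME = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}")
# "duration:" records, e.g.
#   ... trace_step: *NOTICE*: [FTL][ftl0] duration: 13.489 ms ...
DURATION = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms")

def step_timings(log_text):
    # The quadruples arrive in lockstep, so zipping the two streams pairs
    # each step name with its duration in milliseconds.
    return list(zip(NAME.findall(log_text),
                    map(float, DURATION.findall(log_text))))

# Against the records below this would start:
#   [('Check configuration', 0.004), ('Open base bdev', 0.05), ...]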
00:19:14.657 [2024-10-13 17:49:04.416105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.657 [2024-10-13 17:49:04.416152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.225 ms 00:19:14.657 [2024-10-13 17:49:04.416173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.416263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.416286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.657 [2024-10-13 17:49:04.416305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:14.657 [2024-10-13 17:49:04.416323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.416372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.416397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.657 [2024-10-13 17:49:04.416482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:14.657 [2024-10-13 17:49:04.416505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.416543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.657 [2024-10-13 17:49:04.420225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.420324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.657 [2024-10-13 17:49:04.420373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:19:14.657 [2024-10-13 17:49:04.420395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.420448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.420470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.657 [2024-10-13 17:49:04.420490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:14.657 [2024-10-13 17:49:04.420508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.420573] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:14.657 [2024-10-13 17:49:04.420657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:14.657 [2024-10-13 17:49:04.420720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:14.657 [2024-10-13 17:49:04.420759] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:14.657 [2024-10-13 17:49:04.420887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.657 [2024-10-13 17:49:04.420920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.657 [2024-10-13 17:49:04.420986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:14.657 [2024-10-13 17:49:04.421019] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.657 [2024-10-13 17:49:04.421069] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.657 [2024-10-13 17:49:04.421102] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:14.657 [2024-10-13 17:49:04.421121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.657 [2024-10-13 17:49:04.421431] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.657 [2024-10-13 17:49:04.421479] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.657 [2024-10-13 17:49:04.421502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.657 [2024-10-13 17:49:04.421530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.657 [2024-10-13 17:49:04.421551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:14.657 [2024-10-13 17:49:04.421586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.657 [2024-10-13 17:49:04.421702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.658 [2024-10-13 17:49:04.421811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.658 [2024-10-13 17:49:04.421835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:14.658 [2024-10-13 17:49:04.421854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.658 [2024-10-13 17:49:04.421976] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.658 [2024-10-13 17:49:04.422356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.658 [2024-10-13 17:49:04.422437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.658 [2024-10-13 17:49:04.422542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.658 [2024-10-13 17:49:04.422603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:14.658 [2024-10-13 17:49:04.422640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.658 [2024-10-13 17:49:04.422658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.658 [2024-10-13 17:49:04.422694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.658 [2024-10-13 17:49:04.422712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:14.658 [2024-10-13 17:49:04.422805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.658 [2024-10-13 17:49:04.422829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.658 [2024-10-13 17:49:04.422847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:14.658 [2024-10-13 17:49:04.422874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.658 [2024-10-13 17:49:04.422911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:14.658 [2024-10-13 17:49:04.422929] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.658 [2024-10-13 17:49:04.422965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:14.658 [2024-10-13 17:49:04.422983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.658 [2024-10-13 17:49:04.423052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.658 [2024-10-13 17:49:04.423073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.658 [2024-10-13 17:49:04.423093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.658 [2024-10-13 17:49:04.423113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.658 [2024-10-13 17:49:04.423126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.658 [2024-10-13 17:49:04.423133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:14.658 [2024-10-13 17:49:04.423139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.658 [2024-10-13 17:49:04.423147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.658 [2024-10-13 17:49:04.423153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:14.658 [2024-10-13 17:49:04.423160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.658 [2024-10-13 17:49:04.423173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:14.658 [2024-10-13 17:49:04.423179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423186] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.658 [2024-10-13 17:49:04.423195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.658 [2024-10-13 17:49:04.423204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.658 [2024-10-13 17:49:04.423219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.658 [2024-10-13 17:49:04.423226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.658 [2024-10-13 17:49:04.423233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.658 
[2024-10-13 17:49:04.423240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.658 [2024-10-13 17:49:04.423246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.658 [2024-10-13 17:49:04.423253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.658 [2024-10-13 17:49:04.423262] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.658 [2024-10-13 17:49:04.423273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:14.658 [2024-10-13 17:49:04.423289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:14.658 [2024-10-13 17:49:04.423298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:14.658 [2024-10-13 17:49:04.423305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:14.658 [2024-10-13 17:49:04.423313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:14.658 [2024-10-13 17:49:04.423320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:14.658 [2024-10-13 17:49:04.423327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:14.658 [2024-10-13 17:49:04.423334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:14.658 [2024-10-13 17:49:04.423341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:14.658 [2024-10-13 17:49:04.423348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:14.658 [2024-10-13 17:49:04.423385] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.658 [2024-10-13 17:49:04.423394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.658 [2024-10-13 17:49:04.423413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.658 [2024-10-13 17:49:04.423421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.658 [2024-10-13 17:49:04.423430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.658 [2024-10-13 17:49:04.423439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.658 [2024-10-13 17:49:04.423447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.658 [2024-10-13 17:49:04.423455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:19:14.658 [2024-10-13 17:49:04.423462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.658 [2024-10-13 17:49:04.454167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.658 [2024-10-13 17:49:04.454210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.658 [2024-10-13 17:49:04.454221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.654 ms 00:19:14.658 [2024-10-13 17:49:04.454230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.658 [2024-10-13 17:49:04.454311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.658 [2024-10-13 17:49:04.454325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.658 [2024-10-13 17:49:04.454333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:14.658 [2024-10-13 17:49:04.454341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.502507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.502573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.920 [2024-10-13 17:49:04.502587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.107 ms 00:19:14.920 [2024-10-13 17:49:04.502596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.502639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.502649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.920 [2024-10-13 17:49:04.502658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.920 [2024-10-13 17:49:04.502667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.503244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.503284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.920 [2024-10-13 17:49:04.503295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:19:14.920 [2024-10-13 17:49:04.503303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.503458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.503468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.920 [2024-10-13 17:49:04.503476] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:14.920 [2024-10-13 17:49:04.503484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.519697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.519737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.920 [2024-10-13 17:49:04.519749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.188 ms 00:19:14.920 [2024-10-13 17:49:04.519760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.534215] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:14.920 [2024-10-13 17:49:04.534262] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:14.920 [2024-10-13 17:49:04.534275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.534284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:14.920 [2024-10-13 17:49:04.534293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.407 ms 00:19:14.920 [2024-10-13 17:49:04.534301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.560241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.560310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:14.920 [2024-10-13 17:49:04.560330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.886 ms 00:19:14.920 [2024-10-13 17:49:04.560339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.573138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.573186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:14.920 [2024-10-13 17:49:04.573199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:19:14.920 [2024-10-13 17:49:04.573207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.585941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.585987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:14.920 [2024-10-13 17:49:04.586000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:19:14.920 [2024-10-13 17:49:04.586008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.586687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.586714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.920 [2024-10-13 17:49:04.586726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:19:14.920 [2024-10-13 17:49:04.586734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.659821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.660087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:14.920 [2024-10-13 17:49:04.660115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.061 ms 00:19:14.920 [2024-10-13 17:49:04.660135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.673923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:14.920 [2024-10-13 17:49:04.678374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.678425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.920 [2024-10-13 17:49:04.678438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:19:14.920 [2024-10-13 17:49:04.678447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.678589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.678603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:14.920 [2024-10-13 17:49:04.678614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:14.920 [2024-10-13 17:49:04.678624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.678726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.678738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.920 [2024-10-13 17:49:04.678747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:14.920 [2024-10-13 17:49:04.678758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.920 [2024-10-13 17:49:04.678786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.920 [2024-10-13 17:49:04.678795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.920 [2024-10-13 17:49:04.678805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:14.921 [2024-10-13 17:49:04.678815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.921 [2024-10-13 17:49:04.678856] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:14.921 [2024-10-13 17:49:04.678870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.921 [2024-10-13 17:49:04.678882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:14.921 [2024-10-13 17:49:04.678892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:14.921 [2024-10-13 17:49:04.678901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.921 [2024-10-13 17:49:04.706230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.921 [2024-10-13 17:49:04.706342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.921 [2024-10-13 17:49:04.706362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.306 ms 00:19:14.921 [2024-10-13 17:49:04.706373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.921 [2024-10-13 17:49:04.706477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.921 [2024-10-13 17:49:04.706490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.921 [2024-10-13 17:49:04.706500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:14.921 [2024-10-13 17:49:04.706509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.921 [2024-10-13 17:49:04.708191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.325 ms, result 0 00:19:16.309  [2024-10-13T17:49:07.066Z] Copying: 13/1024 [MB] (13 MBps) [...] [2024-10-13T17:50:04.789Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-10-13 17:50:04.764310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.975 [2024-10-13 17:50:04.764405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.975 [2024-10-13 17:50:04.764425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.975 [2024-10-13 17:50:04.764436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.975 [2024-10-13 17:50:04.764464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.975 [2024-10-13 17:50:04.767772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.976 [2024-10-13 17:50:04.767823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.976 [2024-10-13 17:50:04.767836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:20:14.976 [2024-10-13 17:50:04.767846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.976 [2024-10-13 17:50:04.768131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.976 [2024-10-13 17:50:04.768145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.976 [2024-10-13 17:50:04.768156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:14.976 [2024-10-13 17:50:04.768167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.976 [2024-10-13 17:50:04.772310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.976 [2024-10-13 17:50:04.772350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.976 [2024-10-13 17:50:04.772361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:20:14.976 [2024-10-13 17:50:04.772371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.976 [2024-10-13 17:50:04.778650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.976 [2024-10-13 17:50:04.778710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.976 [2024-10-13 17:50:04.778723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:20:14.976 [2024-10-13 17:50:04.778732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.809354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.809420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.238 [2024-10-13 17:50:04.809436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.537 ms 00:20:15.238 [2024-10-13 17:50:04.809445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.827503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.827580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.238 [2024-10-13 17:50:04.827596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.999 ms 00:20:15.238 [2024-10-13 17:50:04.827607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238
[2024-10-13 17:50:04.827766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.827779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.238 [2024-10-13 17:50:04.827799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:15.238 [2024-10-13 17:50:04.827808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.854740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.854796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.238 [2024-10-13 17:50:04.854810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.914 ms 00:20:15.238 [2024-10-13 17:50:04.854818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.880893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.880960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.238 [2024-10-13 17:50:04.880974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.023 ms 00:20:15.238 [2024-10-13 17:50:04.880982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.906779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.906833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.238 [2024-10-13 17:50:04.906846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.746 ms 00:20:15.238 [2024-10-13 17:50:04.906854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.932205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-10-13 17:50:04.932259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.238 [2024-10-13 17:50:04.932272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.243 ms 00:20:15.238 [2024-10-13 17:50:04.932279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-10-13 17:50:04.932331] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.238 [2024-10-13 17:50:04.932349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932429] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.238 [2024-10-13 17:50:04.932550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 
[2024-10-13 17:50:04.932650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.239 [2024-10-13 17:50:04.932872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.932992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.239 [2024-10-13 17:50:04.933231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.239 [2024-10-13 17:50:04.933246] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:20:15.239 [2024-10-13 17:50:04.933255] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.239 [2024-10-13 17:50:04.933268] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.239 [2024-10-13 17:50:04.933277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.239 [2024-10-13 17:50:04.933287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.239 [2024-10-13 17:50:04.933295] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.239 [2024-10-13 17:50:04.933304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.239 [2024-10-13 17:50:04.933322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.239 [2024-10-13 17:50:04.933329] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.239 [2024-10-13 17:50:04.933336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.239 [2024-10-13 17:50:04.933346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.239 [2024-10-13 17:50:04.933355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.239 [2024-10-13 17:50:04.933364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:20:15.239 [2024-10-13 17:50:04.933372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.239 [2024-10-13 17:50:04.947929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.239 [2024-10-13 17:50:04.947991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.239 [2024-10-13 17:50:04.948005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.536 ms 00:20:15.239 [2024-10-13 17:50:04.948014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.239 [2024-10-13 17:50:04.948458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.239 [2024-10-13 17:50:04.948480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.239 [2024-10-13 17:50:04.948490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:20:15.239 [2024-10-13 17:50:04.948498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.240 [2024-10-13 17:50:04.988401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.240 [2024-10-13 17:50:04.988458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.240 [2024-10-13 17:50:04.988472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.240 [2024-10-13 17:50:04.988482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.240 [2024-10-13 17:50:04.988576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.240 [2024-10-13 17:50:04.988588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.240 [2024-10-13 17:50:04.988598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.240 [2024-10-13 17:50:04.988608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.240 [2024-10-13 17:50:04.988715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.240 [2024-10-13 17:50:04.988727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.240 [2024-10-13 17:50:04.988737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.240 [2024-10-13 17:50:04.988746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.240 [2024-10-13 17:50:04.988763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.240 [2024-10-13 17:50:04.988772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.240 [2024-10-13 17:50:04.988781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.240 [2024-10-13 17:50:04.988789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.080767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.080836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize NV cache 00:20:15.501 [2024-10-13 17:50:05.080851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.080861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.501 [2024-10-13 17:50:05.155332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.501 [2024-10-13 17:50:05.155446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.501 [2024-10-13 17:50:05.155582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.501 [2024-10-13 17:50:05.155728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.501 [2024-10-13 17:50:05.155797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.501 [2024-10-13 17:50:05.155884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.155895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.155954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.501 [2024-10-13 17:50:05.155966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.501 [2024-10-13 17:50:05.156001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.501 [2024-10-13 17:50:05.156010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.501 [2024-10-13 17:50:05.156174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', 
duration = 391.823 ms, result 0 00:20:16.444 00:20:16.444 00:20:16.444 17:50:05 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:18.357 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:18.357 17:50:08 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:18.357 [2024-10-13 17:50:08.144080] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:20:18.357 [2024-10-13 17:50:08.144221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76589 ] 00:20:18.618 [2024-10-13 17:50:08.296850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.879 [2024-10-13 17:50:08.440757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.140 [2024-10-13 17:50:08.774910] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.140 [2024-10-13 17:50:08.775001] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.140 [2024-10-13 17:50:08.939817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.140 [2024-10-13 17:50:08.939891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.141 [2024-10-13 17:50:08.939908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.141 [2024-10-13 17:50:08.939923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.141 [2024-10-13 17:50:08.939995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.141 [2024-10-13 17:50:08.940006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.141 [2024-10-13 17:50:08.940016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:19.141 [2024-10-13 17:50:08.940029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.141 [2024-10-13 17:50:08.940052] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.141 [2024-10-13 17:50:08.940928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.141 [2024-10-13 17:50:08.940988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.141 [2024-10-13 17:50:08.941000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.141 [2024-10-13 17:50:08.941011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:20:19.141 [2024-10-13 17:50:08.941020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.141 [2024-10-13 17:50:08.943352] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.403 [2024-10-13 17:50:08.958828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.403 [2024-10-13 17:50:08.958883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.403 [2024-10-13 17:50:08.958898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.478 ms 00:20:19.403 [2024-10-13 17:50:08.958908] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.403 [2024-10-13 17:50:08.958999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.403 [2024-10-13 17:50:08.959010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.403 [2024-10-13 17:50:08.959026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:19.403 [2024-10-13 17:50:08.959035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.403 [2024-10-13 17:50:08.970853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.403 [2024-10-13 17:50:08.970905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.403 [2024-10-13 17:50:08.970918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.732 ms 00:20:19.403 [2024-10-13 17:50:08.970926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.403 [2024-10-13 17:50:08.971023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.403 [2024-10-13 17:50:08.971033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.403 [2024-10-13 17:50:08.971042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:19.403 [2024-10-13 17:50:08.971050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.403 [2024-10-13 17:50:08.971113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.403 [2024-10-13 17:50:08.971125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.403 [2024-10-13 17:50:08.971134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.404 [2024-10-13 17:50:08.971142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.404 [2024-10-13 17:50:08.971167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.404 [2024-10-13 17:50:08.975843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.404 [2024-10-13 17:50:08.975891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.404 [2024-10-13 17:50:08.975903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:20:19.404 [2024-10-13 17:50:08.975912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.404 [2024-10-13 17:50:08.975956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.404 [2024-10-13 17:50:08.975980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.404 [2024-10-13 17:50:08.975990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:19.404 [2024-10-13 17:50:08.975999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.404 [2024-10-13 17:50:08.976039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.404 [2024-10-13 17:50:08.976067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.404 [2024-10-13 17:50:08.976109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.404 [2024-10-13 17:50:08.976133] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.404 [2024-10-13 17:50:08.976246] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.404 [2024-10-13 17:50:08.976258] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.404 [2024-10-13 17:50:08.976271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.404 [2024-10-13 17:50:08.976283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976293] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.404 [2024-10-13 17:50:08.976310] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.404 [2024-10-13 17:50:08.976320] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.404 [2024-10-13 17:50:08.976329] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.404 [2024-10-13 17:50:08.976338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.404 [2024-10-13 17:50:08.976350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.404 [2024-10-13 17:50:08.976359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:19.404 [2024-10-13 17:50:08.976366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.404 [2024-10-13 17:50:08.976450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.404 [2024-10-13 17:50:08.976459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.404 [2024-10-13 17:50:08.976467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:19.404 [2024-10-13 17:50:08.976475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.404 [2024-10-13 17:50:08.976601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.404 [2024-10-13 17:50:08.976613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.404 [2024-10-13 17:50:08.976626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.404 [2024-10-13 17:50:08.976651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.404 [2024-10-13 17:50:08.976675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.404 [2024-10-13 17:50:08.976689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.404 [2024-10-13 17:50:08.976696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.404 [2024-10-13 17:50:08.976703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.404 
[2024-10-13 17:50:08.976715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.404 [2024-10-13 17:50:08.976723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.404 [2024-10-13 17:50:08.976738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.404 [2024-10-13 17:50:08.976753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.404 [2024-10-13 17:50:08.976774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.404 [2024-10-13 17:50:08.976795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.404 [2024-10-13 17:50:08.976816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.404 [2024-10-13 17:50:08.976837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.404 [2024-10-13 17:50:08.976859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.404 [2024-10-13 17:50:08.976874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.404 [2024-10-13 17:50:08.976882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.404 [2024-10-13 17:50:08.976889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.404 [2024-10-13 17:50:08.976897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.404 [2024-10-13 17:50:08.976904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.404 [2024-10-13 17:50:08.976911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.404 [2024-10-13 17:50:08.976925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.404 [2024-10-13 17:50:08.976933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976940] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.404 [2024-10-13 17:50:08.976948] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region sb_mirror 00:20:19.404 [2024-10-13 17:50:08.976958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.404 [2024-10-13 17:50:08.976965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.404 [2024-10-13 17:50:08.976974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.404 [2024-10-13 17:50:08.976981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.404 [2024-10-13 17:50:08.976989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.404 [2024-10-13 17:50:08.976997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.404 [2024-10-13 17:50:08.977005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.404 [2024-10-13 17:50:08.977012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.404 [2024-10-13 17:50:08.977021] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.404 [2024-10-13 17:50:08.977031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.404 [2024-10-13 17:50:08.977041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.404 [2024-10-13 17:50:08.977048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.404 [2024-10-13 17:50:08.977056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.404 [2024-10-13 17:50:08.977063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.404 [2024-10-13 17:50:08.977073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.404 [2024-10-13 17:50:08.977080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.404 [2024-10-13 17:50:08.977088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.404 [2024-10-13 17:50:08.977095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.404 [2024-10-13 17:50:08.977103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.404 [2024-10-13 17:50:08.977109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.404 [2024-10-13 17:50:08.977117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.404 [2024-10-13 17:50:08.977125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.404 [2024-10-13 17:50:08.977133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.404 [2024-10-13 17:50:08.977140] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.404 [2024-10-13 17:50:08.977148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.404 [2024-10-13 17:50:08.977157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.405 [2024-10-13 17:50:08.977169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.405 [2024-10-13 17:50:08.977177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.405 [2024-10-13 17:50:08.977185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.405 [2024-10-13 17:50:08.977192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.405 [2024-10-13 17:50:08.977199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:08.977208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.405 [2024-10-13 17:50:08.977218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:20:19.405 [2024-10-13 17:50:08.977227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.016074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.016129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.405 [2024-10-13 17:50:09.016144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.800 ms 00:20:19.405 [2024-10-13 17:50:09.016153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.016244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.016257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.405 [2024-10-13 17:50:09.016267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:19.405 [2024-10-13 17:50:09.016275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.068762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.068825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.405 [2024-10-13 17:50:09.068840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.422 ms 00:20:19.405 [2024-10-13 17:50:09.068850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.068907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.068918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.405 [2024-10-13 17:50:09.068928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.405 [2024-10-13 17:50:09.068937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.069741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 
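(Editor's note: the layout dumps above print the same information twice: dump_region reports each region's offset and size in MiB, while ftl_superblock_v5_md_layout_dump reports raw block offsets and counts in hex. A minimal Python sketch of the conversion between the two, assuming the 4 KiB FTL block size implied by the numbers themselves; the script is illustrative only and is not part of the test:

    # Convert blk_offs/blk_sz from the SB metadata layout dump into the MiB
    # figures printed by dump_region. 4096 bytes/block is an assumption
    # inferred from the dumps (0x5000 blocks -> 80.00 MiB for the l2p region).
    FTL_BLOCK_SIZE = 4096  # bytes

    def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
        """Return (offset, size) in MiB for a region given in FTL blocks."""
        to_mib = FTL_BLOCK_SIZE / (1024 * 1024)
        return blk_offs * to_mib, blk_sz * to_mib

    # Rows copied from the "SB metadata layout - nvc" dump above:
    for rtype, offs, size in [(0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0xa, 0x5120, 0x800)]:
        off_mib, sz_mib = region_mib(offs, size)
        print(f"type 0x{rtype:x}: offset {off_mib:.2f} MiB, blocks {sz_mib:.2f} MiB")
    # type 0x0: offset 0.00 MiB, blocks 0.12 MiB   -> Region sb
    # type 0x2: offset 0.12 MiB, blocks 80.00 MiB  -> Region l2p
    # type 0xa: offset 81.12 MiB, blocks 8.00 MiB  -> Region p2l0

The 80 MiB l2p region is also consistent with the device parameters the log reports elsewhere: 20971520 L2P entries at an address size of 4 bytes is exactly 80 MiB.)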
[2024-10-13 17:50:09.069786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.405 [2024-10-13 17:50:09.069798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:19.405 [2024-10-13 17:50:09.069807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.069983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.069996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.405 [2024-10-13 17:50:09.070005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:19.405 [2024-10-13 17:50:09.070014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.088432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.088484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.405 [2024-10-13 17:50:09.088497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.392 ms 00:20:19.405 [2024-10-13 17:50:09.088509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.104293] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:19.405 [2024-10-13 17:50:09.104351] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.405 [2024-10-13 17:50:09.104366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.104375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.405 [2024-10-13 17:50:09.104386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.699 ms 00:20:19.405 [2024-10-13 17:50:09.104394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.131257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.131331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.405 [2024-10-13 17:50:09.131355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.802 ms 00:20:19.405 [2024-10-13 17:50:09.131363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.144750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.144806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.405 [2024-10-13 17:50:09.144818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.318 ms 00:20:19.405 [2024-10-13 17:50:09.144826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.157663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.157714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.405 [2024-10-13 17:50:09.157726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:20:19.405 [2024-10-13 17:50:09.157734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.405 [2024-10-13 17:50:09.158403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.405 [2024-10-13 17:50:09.158436] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.405 [2024-10-13 17:50:09.158448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:19.405 [2024-10-13 17:50:09.158457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.666 [2024-10-13 17:50:09.232852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.666 [2024-10-13 17:50:09.232922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.666 [2024-10-13 17:50:09.232939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.372 ms 00:20:19.666 [2024-10-13 17:50:09.232956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.666 [2024-10-13 17:50:09.245414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.666 [2024-10-13 17:50:09.249197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.666 [2024-10-13 17:50:09.249246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.666 [2024-10-13 17:50:09.249258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.177 ms 00:20:19.666 [2024-10-13 17:50:09.249268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.666 [2024-10-13 17:50:09.249368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.666 [2024-10-13 17:50:09.249381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.666 [2024-10-13 17:50:09.249392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:19.666 [2024-10-13 17:50:09.249403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.666 [2024-10-13 17:50:09.249492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.666 [2024-10-13 17:50:09.249505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.666 [2024-10-13 17:50:09.249514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.666 [2024-10-13 17:50:09.249522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.666 [2024-10-13 17:50:09.249551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.666 [2024-10-13 17:50:09.249580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.666 [2024-10-13 17:50:09.249591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:19.667 [2024-10-13 17:50:09.249600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.667 [2024-10-13 17:50:09.249639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.667 [2024-10-13 17:50:09.249651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.667 [2024-10-13 17:50:09.249664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.667 [2024-10-13 17:50:09.249672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:19.667 [2024-10-13 17:50:09.249680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.667 [2024-10-13 17:50:09.277711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.667 [2024-10-13 17:50:09.277769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.667 [2024-10-13 17:50:09.277783] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 28.005 ms 00:20:19.667 [2024-10-13 17:50:09.277793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.667 [2024-10-13 17:50:09.277906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.667 [2024-10-13 17:50:09.277918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.667 [2024-10-13 17:50:09.277929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.667 [2024-10-13 17:50:09.277938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.667 [2024-10-13 17:50:09.279473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.089 ms, result 0 00:20:20.612  [2024-10-13T17:50:11.411Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-13T17:50:12.354Z] Copying: 25/1024 [MB] (12 MBps) [2024-10-13T17:50:13.299Z] Copying: 43/1024 [MB] (17 MBps) [2024-10-13T17:50:14.686Z] Copying: 53/1024 [MB] (10 MBps) [2024-10-13T17:50:15.630Z] Copying: 70/1024 [MB] (16 MBps) [2024-10-13T17:50:16.574Z] Copying: 85/1024 [MB] (15 MBps) [2024-10-13T17:50:17.518Z] Copying: 107/1024 [MB] (22 MBps) [2024-10-13T17:50:18.466Z] Copying: 132/1024 [MB] (24 MBps) [2024-10-13T17:50:19.411Z] Copying: 143/1024 [MB] (11 MBps) [2024-10-13T17:50:20.356Z] Copying: 161/1024 [MB] (17 MBps) [2024-10-13T17:50:21.298Z] Copying: 175/1024 [MB] (13 MBps) [2024-10-13T17:50:22.684Z] Copying: 201/1024 [MB] (25 MBps) [2024-10-13T17:50:23.628Z] Copying: 212/1024 [MB] (11 MBps) [2024-10-13T17:50:24.572Z] Copying: 229/1024 [MB] (16 MBps) [2024-10-13T17:50:25.517Z] Copying: 244/1024 [MB] (15 MBps) [2024-10-13T17:50:26.460Z] Copying: 262/1024 [MB] (17 MBps) [2024-10-13T17:50:27.403Z] Copying: 282/1024 [MB] (20 MBps) [2024-10-13T17:50:28.348Z] Copying: 305/1024 [MB] (22 MBps) [2024-10-13T17:50:29.292Z] Copying: 320/1024 [MB] (15 MBps) [2024-10-13T17:50:30.695Z] Copying: 334/1024 [MB] (13 MBps) [2024-10-13T17:50:31.675Z] Copying: 346/1024 [MB] (12 MBps) [2024-10-13T17:50:32.620Z] Copying: 368/1024 [MB] (21 MBps) [2024-10-13T17:50:33.564Z] Copying: 383/1024 [MB] (15 MBps) [2024-10-13T17:50:34.508Z] Copying: 406/1024 [MB] (22 MBps) [2024-10-13T17:50:35.452Z] Copying: 426/1024 [MB] (20 MBps) [2024-10-13T17:50:36.395Z] Copying: 448/1024 [MB] (22 MBps) [2024-10-13T17:50:37.339Z] Copying: 465/1024 [MB] (17 MBps) [2024-10-13T17:50:38.727Z] Copying: 491/1024 [MB] (25 MBps) [2024-10-13T17:50:39.300Z] Copying: 503/1024 [MB] (12 MBps) [2024-10-13T17:50:40.686Z] Copying: 527/1024 [MB] (23 MBps) [2024-10-13T17:50:41.630Z] Copying: 543/1024 [MB] (15 MBps) [2024-10-13T17:50:42.574Z] Copying: 565/1024 [MB] (22 MBps) [2024-10-13T17:50:43.518Z] Copying: 581/1024 [MB] (15 MBps) [2024-10-13T17:50:44.482Z] Copying: 596/1024 [MB] (15 MBps) [2024-10-13T17:50:45.426Z] Copying: 607/1024 [MB] (10 MBps) [2024-10-13T17:50:46.370Z] Copying: 628/1024 [MB] (21 MBps) [2024-10-13T17:50:47.313Z] Copying: 644/1024 [MB] (15 MBps) [2024-10-13T17:50:48.696Z] Copying: 670/1024 [MB] (25 MBps) [2024-10-13T17:50:49.640Z] Copying: 681/1024 [MB] (10 MBps) [2024-10-13T17:50:50.584Z] Copying: 691/1024 [MB] (10 MBps) [2024-10-13T17:50:51.540Z] Copying: 704/1024 [MB] (12 MBps) [2024-10-13T17:50:52.514Z] Copying: 721/1024 [MB] (17 MBps) [2024-10-13T17:50:53.460Z] Copying: 734/1024 [MB] (13 MBps) [2024-10-13T17:50:54.406Z] Copying: 747/1024 [MB] (13 MBps) [2024-10-13T17:50:55.350Z] Copying: 760/1024 [MB] (12 MBps) [2024-10-13T17:50:56.293Z] Copying: 776/1024 [MB] (16 MBps) 
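(Editor's note: the dd-style progress records above report cumulative MiB copied plus a parenthesized rate over roughly one-second intervals. A small Python sketch that estimates throughput from the receive timestamps; those lag the emitter slightly, so this approximates, rather than reproduces, the bracketed figures:

    import re
    from datetime import datetime

    # Match records such as "[2024-10-13T17:50:11.411Z] Copying: 13/1024 [MB]".
    REC = re.compile(r"\[([0-9T:.-]+)Z\] Copying: (\d+)/\d+ \[MB\]")

    def throughput(log_text: str) -> float:
        """MB/s between the first and last progress records in log_text."""
        samples = [(datetime.fromisoformat(ts), int(mb))
                   for ts, mb in REC.findall(log_text)]
        (t0, mb0), (t1, mb1) = samples[0], samples[-1]
        return (mb1 - mb0) / (t1 - t0).total_seconds()

    text = ("[2024-10-13T17:50:11.411Z] Copying: 13/1024 [MB] (13 MBps) "
            "[2024-10-13T17:50:56.293Z] Copying: 776/1024 [MB] (16 MBps)")
    print(f"{throughput(text):.1f} MB/s")  # ~17.0 MB/s over this window

This agrees with the per-record rates above, which hover in the 10-25 MBps range, and with the whole-run average the log prints once the copy completes.)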
[2024-10-13T17:50:57.680Z] Copying: 793/1024 [MB] (17 MBps) [2024-10-13T17:50:58.622Z] Copying: 813/1024 [MB] (20 MBps) [2024-10-13T17:50:59.565Z] Copying: 832/1024 [MB] (18 MBps) [2024-10-13T17:51:00.507Z] Copying: 845/1024 [MB] (13 MBps) [2024-10-13T17:51:01.451Z] Copying: 859/1024 [MB] (13 MBps) [2024-10-13T17:51:02.393Z] Copying: 872/1024 [MB] (13 MBps) [2024-10-13T17:51:03.335Z] Copying: 890/1024 [MB] (17 MBps) [2024-10-13T17:51:04.722Z] Copying: 906/1024 [MB] (16 MBps) [2024-10-13T17:51:05.295Z] Copying: 921/1024 [MB] (14 MBps) [2024-10-13T17:51:06.681Z] Copying: 933/1024 [MB] (12 MBps) [2024-10-13T17:51:07.625Z] Copying: 948/1024 [MB] (15 MBps) [2024-10-13T17:51:08.568Z] Copying: 967/1024 [MB] (18 MBps) [2024-10-13T17:51:09.511Z] Copying: 981/1024 [MB] (14 MBps) [2024-10-13T17:51:10.455Z] Copying: 994/1024 [MB] (12 MBps) [2024-10-13T17:51:11.418Z] Copying: 1009/1024 [MB] (14 MBps) [2024-10-13T17:51:12.422Z] Copying: 1023/1024 [MB] (14 MBps) [2024-10-13T17:51:12.422Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-13 17:51:12.234621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.608 [2024-10-13 17:51:12.234693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.608 [2024-10-13 17:51:12.234714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.608 [2024-10-13 17:51:12.234725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.608 [2024-10-13 17:51:12.238212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.608 [2024-10-13 17:51:12.243189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.608 [2024-10-13 17:51:12.243238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.608 [2024-10-13 17:51:12.243253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:21:22.608 [2024-10-13 17:51:12.243263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.608 [2024-10-13 17:51:12.256352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.608 [2024-10-13 17:51:12.256405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.608 [2024-10-13 17:51:12.256420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.794 ms 00:21:22.608 [2024-10-13 17:51:12.256430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.608 [2024-10-13 17:51:12.285505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.608 [2024-10-13 17:51:12.285576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.608 [2024-10-13 17:51:12.285591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.054 ms 00:21:22.608 [2024-10-13 17:51:12.285600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.608 [2024-10-13 17:51:12.291861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.608 [2024-10-13 17:51:12.291935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.608 [2024-10-13 17:51:12.291949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:21:22.609 [2024-10-13 17:51:12.291958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-10-13 17:51:12.320674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-10-13 17:51:12.320729] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.609 [2024-10-13 17:51:12.320744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.645 ms 00:21:22.609 [2024-10-13 17:51:12.320754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.609 [2024-10-13 17:51:12.338060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.609 [2024-10-13 17:51:12.338111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.609 [2024-10-13 17:51:12.338134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.175 ms 00:21:22.609 [2024-10-13 17:51:12.338143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.494017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.870 [2024-10-13 17:51:12.494079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.870 [2024-10-13 17:51:12.494096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 155.812 ms 00:21:22.870 [2024-10-13 17:51:12.494106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.519837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.870 [2024-10-13 17:51:12.519888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.870 [2024-10-13 17:51:12.519903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:21:22.870 [2024-10-13 17:51:12.519919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.544450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.870 [2024-10-13 17:51:12.544505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.870 [2024-10-13 17:51:12.544518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:21:22.870 [2024-10-13 17:51:12.544526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.568912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.870 [2024-10-13 17:51:12.568953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.870 [2024-10-13 17:51:12.568966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.061 ms 00:21:22.870 [2024-10-13 17:51:12.568973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.592166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.870 [2024-10-13 17:51:12.592205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.870 [2024-10-13 17:51:12.592216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:21:22.870 [2024-10-13 17:51:12.592223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.870 [2024-10-13 17:51:12.592303] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.870 [2024-10-13 17:51:12.592323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98560 / 261120 wr_cnt: 1 state: open 00:21:22.870 [2024-10-13 17:51:12.592334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 
261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.870 [2024-10-13 17:51:12.592577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592770] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 
17:51:12.592977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.592992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.871 [2024-10-13 17:51:12.593163] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.871 [2024-10-13 17:51:12.593172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:21:22.871 [2024-10-13 
17:51:12.593180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98560 00:21:22.871 [2024-10-13 17:51:12.593188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99520 00:21:22.871 [2024-10-13 17:51:12.593195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98560 00:21:22.871 [2024-10-13 17:51:12.593205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:21:22.871 [2024-10-13 17:51:12.593212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.871 [2024-10-13 17:51:12.593221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.871 [2024-10-13 17:51:12.593239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.871 [2024-10-13 17:51:12.593246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.871 [2024-10-13 17:51:12.593253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.871 [2024-10-13 17:51:12.593261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.871 [2024-10-13 17:51:12.593272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.871 [2024-10-13 17:51:12.593282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:21:22.871 [2024-10-13 17:51:12.593289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.871 [2024-10-13 17:51:12.607169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.871 [2024-10-13 17:51:12.607206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.871 [2024-10-13 17:51:12.607217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.863 ms 00:21:22.871 [2024-10-13 17:51:12.607226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.871 [2024-10-13 17:51:12.607665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.871 [2024-10-13 17:51:12.607681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.871 [2024-10-13 17:51:12.607692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:21:22.871 [2024-10-13 17:51:12.607700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.871 [2024-10-13 17:51:12.645372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.871 [2024-10-13 17:51:12.645416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.871 [2024-10-13 17:51:12.645429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.871 [2024-10-13 17:51:12.645444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.871 [2024-10-13 17:51:12.645516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.871 [2024-10-13 17:51:12.645527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.871 [2024-10-13 17:51:12.645537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.871 [2024-10-13 17:51:12.645546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.871 [2024-10-13 17:51:12.645651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.871 [2024-10-13 17:51:12.645663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.872 [2024-10-13 17:51:12.645674] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.872 [2024-10-13 17:51:12.645684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.872 [2024-10-13 17:51:12.645706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.872 [2024-10-13 17:51:12.645716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.872 [2024-10-13 17:51:12.645725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.872 [2024-10-13 17:51:12.645734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.133 [2024-10-13 17:51:12.736705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.133 [2024-10-13 17:51:12.736768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.133 [2024-10-13 17:51:12.736784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.133 [2024-10-13 17:51:12.736800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.133 [2024-10-13 17:51:12.810953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.133 [2024-10-13 17:51:12.811016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.133 [2024-10-13 17:51:12.811030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.133 [2024-10-13 17:51:12.811040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.133 [2024-10-13 17:51:12.811118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.133 [2024-10-13 17:51:12.811134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.133 [2024-10-13 17:51:12.811144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.133 [2024-10-13 17:51:12.811154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.134 [2024-10-13 17:51:12.811241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.134 [2024-10-13 17:51:12.811250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.134 [2024-10-13 17:51:12.811259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.134 [2024-10-13 17:51:12.811380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.134 [2024-10-13 17:51:12.811390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.134 [2024-10-13 17:51:12.811398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.134 [2024-10-13 17:51:12.811453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.134 [2024-10-13 17:51:12.811463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.134 [2024-10-13 17:51:12.811472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.134 [2024-10-13 17:51:12.811539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:21:23.134 [2024-10-13 17:51:12.811549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.134 [2024-10-13 17:51:12.811569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.134 [2024-10-13 17:51:12.811646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.134 [2024-10-13 17:51:12.811655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.134 [2024-10-13 17:51:12.811664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.134 [2024-10-13 17:51:12.811812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 577.304 ms, result 0 00:21:24.512 00:21:24.512 00:21:24.512 17:51:14 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:24.512 [2024-10-13 17:51:14.189523] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:21:24.513 [2024-10-13 17:51:14.189703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77282 ] 00:21:24.774 [2024-10-13 17:51:14.343266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.774 [2024-10-13 17:51:14.464068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.035 [2024-10-13 17:51:14.773459] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:25.035 [2024-10-13 17:51:14.773568] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:25.295 [2024-10-13 17:51:14.937471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-13 17:51:14.937542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:25.295 [2024-10-13 17:51:14.937574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:25.295 [2024-10-13 17:51:14.937590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.295 [2024-10-13 17:51:14.937646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-13 17:51:14.937657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.295 [2024-10-13 17:51:14.937667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:25.295 [2024-10-13 17:51:14.937678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.295 [2024-10-13 17:51:14.937700] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:25.295 [2024-10-13 17:51:14.938399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:25.295 [2024-10-13 17:51:14.938417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-13 17:51:14.938429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.295 [2024-10-13 17:51:14.938439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.723 ms 00:21:25.295 [2024-10-13 17:51:14.938448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.295 [2024-10-13 17:51:14.940656] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:25.295 [2024-10-13 17:51:14.955367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.955412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:25.296 [2024-10-13 17:51:14.955427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.713 ms 00:21:25.296 [2024-10-13 17:51:14.955437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.955520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.955531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:25.296 [2024-10-13 17:51:14.955544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:25.296 [2024-10-13 17:51:14.955553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.966932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.966977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.296 [2024-10-13 17:51:14.966990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:21:25.296 [2024-10-13 17:51:14.967000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.967086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.967096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.296 [2024-10-13 17:51:14.967106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:25.296 [2024-10-13 17:51:14.967114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.967173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.967184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:25.296 [2024-10-13 17:51:14.967194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:25.296 [2024-10-13 17:51:14.967203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.967227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:25.296 [2024-10-13 17:51:14.971707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.971747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.296 [2024-10-13 17:51:14.971758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.486 ms 00:21:25.296 [2024-10-13 17:51:14.971767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.971808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.971817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:25.296 [2024-10-13 17:51:14.971826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:25.296 [2024-10-13 17:51:14.971835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.971874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:25.296 [2024-10-13 17:51:14.971906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:25.296 [2024-10-13 17:51:14.971961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:25.296 [2024-10-13 17:51:14.971983] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:25.296 [2024-10-13 17:51:14.972097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:25.296 [2024-10-13 17:51:14.972114] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:25.296 [2024-10-13 17:51:14.972127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:25.296 [2024-10-13 17:51:14.972140] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972160] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:25.296 [2024-10-13 17:51:14.972168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:25.296 [2024-10-13 17:51:14.972176] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:25.296 [2024-10-13 17:51:14.972185] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:25.296 [2024-10-13 17:51:14.972194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.972207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:25.296 [2024-10-13 17:51:14.972215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:25.296 [2024-10-13 17:51:14.972223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.972308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.972317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:25.296 [2024-10-13 17:51:14.972325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:25.296 [2024-10-13 17:51:14.972333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:14.972441] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:25.296 [2024-10-13 17:51:14.972452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:25.296 [2024-10-13 17:51:14.972464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:25.296 [2024-10-13 17:51:14.972489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972504] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:25.296 [2024-10-13 17:51:14.972512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.296 [2024-10-13 17:51:14.972525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:25.296 [2024-10-13 17:51:14.972532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:25.296 [2024-10-13 17:51:14.972539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.296 [2024-10-13 17:51:14.972549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:25.296 [2024-10-13 17:51:14.972574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:25.296 [2024-10-13 17:51:14.972591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:25.296 [2024-10-13 17:51:14.972607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:25.296 [2024-10-13 17:51:14.972630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:25.296 [2024-10-13 17:51:14.972651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:25.296 [2024-10-13 17:51:14.972672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:25.296 [2024-10-13 17:51:14.972694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:25.296 [2024-10-13 17:51:14.972715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.296 [2024-10-13 17:51:14.972729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:25.296 [2024-10-13 17:51:14.972736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:25.296 [2024-10-13 17:51:14.972742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.296 [2024-10-13 17:51:14.972749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:25.296 [2024-10-13 17:51:14.972756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 
00:21:25.296 [2024-10-13 17:51:14.972763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:25.296 [2024-10-13 17:51:14.972778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:25.296 [2024-10-13 17:51:14.972785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972791] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:25.296 [2024-10-13 17:51:14.972800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:25.296 [2024-10-13 17:51:14.972808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.296 [2024-10-13 17:51:14.972827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:25.296 [2024-10-13 17:51:14.972834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:25.296 [2024-10-13 17:51:14.972841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:25.296 [2024-10-13 17:51:14.972849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:25.296 [2024-10-13 17:51:14.972856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:25.296 [2024-10-13 17:51:14.972863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:25.296 [2024-10-13 17:51:14.972872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:25.296 [2024-10-13 17:51:14.972883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.972892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:25.296 [2024-10-13 17:51:14.972899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:25.296 [2024-10-13 17:51:14.972906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:25.296 [2024-10-13 17:51:14.972913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:25.296 [2024-10-13 17:51:14.972919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:25.296 [2024-10-13 17:51:14.972926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:25.296 [2024-10-13 17:51:14.972934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:25.296 [2024-10-13 17:51:14.972941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:25.296 [2024-10-13 17:51:14.972948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:25.296 [2024-10-13 17:51:14.972956] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.972963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.972970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.972979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.972987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:25.296 [2024-10-13 17:51:14.972995] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:25.296 [2024-10-13 17:51:14.973003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.973014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:25.296 [2024-10-13 17:51:14.973021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:25.296 [2024-10-13 17:51:14.973029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:25.296 [2024-10-13 17:51:14.973036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:25.296 [2024-10-13 17:51:14.973043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:14.973052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:25.296 [2024-10-13 17:51:14.973061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:21:25.296 [2024-10-13 17:51:14.973071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.010912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.010965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.296 [2024-10-13 17:51:15.010977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.794 ms 00:21:25.296 [2024-10-13 17:51:15.010987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.011082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.011096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:25.296 [2024-10-13 17:51:15.011106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:25.296 [2024-10-13 17:51:15.011115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.077256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.077311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.296 [2024-10-13 17:51:15.077326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.080 ms 
00:21:25.296 [2024-10-13 17:51:15.077335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.077388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.077398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.296 [2024-10-13 17:51:15.077409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:25.296 [2024-10-13 17:51:15.077417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.078187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.078224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.296 [2024-10-13 17:51:15.078236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:21:25.296 [2024-10-13 17:51:15.078245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.078420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.078438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.296 [2024-10-13 17:51:15.078449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:21:25.296 [2024-10-13 17:51:15.078457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.296 [2024-10-13 17:51:15.096440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.296 [2024-10-13 17:51:15.096484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.296 [2024-10-13 17:51:15.096496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.957 ms 00:21:25.296 [2024-10-13 17:51:15.096508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.111438] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:25.558 [2024-10-13 17:51:15.111487] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:25.558 [2024-10-13 17:51:15.111501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.111510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:25.558 [2024-10-13 17:51:15.111521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.851 ms 00:21:25.558 [2024-10-13 17:51:15.111529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.137588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.137632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:25.558 [2024-10-13 17:51:15.137654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.971 ms 00:21:25.558 [2024-10-13 17:51:15.137663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.150595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.150649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:25.558 [2024-10-13 17:51:15.150662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.781 ms 00:21:25.558 [2024-10-13 17:51:15.150671] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.162986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.163026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:25.558 [2024-10-13 17:51:15.163038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:21:25.558 [2024-10-13 17:51:15.163046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.163717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.163745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:25.558 [2024-10-13 17:51:15.163757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:21:25.558 [2024-10-13 17:51:15.163766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.235986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.236057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:25.558 [2024-10-13 17:51:15.236074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.194 ms 00:21:25.558 [2024-10-13 17:51:15.236093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.247928] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:25.558 [2024-10-13 17:51:15.251903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.251966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:25.558 [2024-10-13 17:51:15.251981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.749 ms 00:21:25.558 [2024-10-13 17:51:15.251991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.252091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.252105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:25.558 [2024-10-13 17:51:15.252117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:25.558 [2024-10-13 17:51:15.252126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.254215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.254262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:25.558 [2024-10-13 17:51:15.254275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:21:25.558 [2024-10-13 17:51:15.254284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.254318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.254329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:25.558 [2024-10-13 17:51:15.254340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:25.558 [2024-10-13 17:51:15.254350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.254402] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:25.558 [2024-10-13 17:51:15.254416] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.254429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:25.558 [2024-10-13 17:51:15.254440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:25.558 [2024-10-13 17:51:15.254450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.280698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.280741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:25.558 [2024-10-13 17:51:15.280756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.219 ms 00:21:25.558 [2024-10-13 17:51:15.280765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.280870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.558 [2024-10-13 17:51:15.280881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:25.558 [2024-10-13 17:51:15.280892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:25.558 [2024-10-13 17:51:15.280900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.558 [2024-10-13 17:51:15.282386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.359 ms, result 0 00:21:26.942  [2024-10-13T17:51:17.700Z] Copying: 7776/1048576 [kB] (7776 kBps) [carriage-return progress meter flattened into the log; ~55 intermediate per-second Copying updates elided, throughput 10-25 MBps] [2024-10-13T17:52:14.045Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-13 17:52:13.872260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.872350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.231 [2024-10-13 17:52:13.872370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.231 [2024-10-13 17:52:13.872380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.872406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.231 [2024-10-13 17:52:13.875706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.875756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.231 [2024-10-13 17:52:13.875769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:22:24.231 [2024-10-13 17:52:13.875778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.876055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.876067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.231 [2024-10-13 17:52:13.876077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:22:24.231 [2024-10-13 17:52:13.876087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.882814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.883003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.231 [2024-10-13 17:52:13.883017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.706 ms 00:22:24.231 [2024-10-13 17:52:13.883026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.891218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.891273] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:24.231 [2024-10-13 17:52:13.891286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.144 ms 00:22:24.231 [2024-10-13 17:52:13.891295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.919432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.919488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:24.231 [2024-10-13 17:52:13.919503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.069 ms 00:22:24.231 [2024-10-13 17:52:13.919512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.231 [2024-10-13 17:52:13.936646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.231 [2024-10-13 17:52:13.936700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:24.231 [2024-10-13 17:52:13.936721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.068 ms 00:22:24.231 [2024-10-13 17:52:13.936731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.157498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.493 [2024-10-13 17:52:14.157581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:24.493 [2024-10-13 17:52:14.157597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.709 ms 00:22:24.493 [2024-10-13 17:52:14.157606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.184120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.493 [2024-10-13 17:52:14.184172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:24.493 [2024-10-13 17:52:14.184187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.494 ms 00:22:24.493 [2024-10-13 17:52:14.184195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.210037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.493 [2024-10-13 17:52:14.210087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:24.493 [2024-10-13 17:52:14.210115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.792 ms 00:22:24.493 [2024-10-13 17:52:14.210123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.235256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.493 [2024-10-13 17:52:14.235305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:24.493 [2024-10-13 17:52:14.235318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.083 ms 00:22:24.493 [2024-10-13 17:52:14.235326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.260432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.493 [2024-10-13 17:52:14.260491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:24.493 [2024-10-13 17:52:14.260504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.015 ms 00:22:24.493 [2024-10-13 17:52:14.260512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.493 [2024-10-13 17:52:14.260579] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:24.493 [2024-10-13 17:52:14.260599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:24.493 [2024-10-13 17:52:14.260614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260801] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:24.493 [2024-10-13 17:52:14.260984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.260991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 
17:52:14.260999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:22:24.494 [2024-10-13 17:52:14.261217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:24.494 [2024-10-13 17:52:14.261442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:24.494 [2024-10-13 17:52:14.261452] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90a54ddc-e2a7-4828-bef8-d66a300cc8d9 00:22:24.494 [2024-10-13 17:52:14.261461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:24.494 [2024-10-13 17:52:14.261469] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 33472 00:22:24.494 [2024-10-13 17:52:14.261478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 32512 00:22:24.494 [2024-10-13 17:52:14.261489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0295 00:22:24.494 [2024-10-13 17:52:14.261497] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:24.494 [2024-10-13 17:52:14.261507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:24.494 [2024-10-13 17:52:14.261516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:24.494 [2024-10-13 17:52:14.261535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:24.494 [2024-10-13 17:52:14.261542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:24.494 [2024-10-13 17:52:14.261551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.494 [2024-10-13 17:52:14.261583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:24.494 [2024-10-13 17:52:14.261594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:22:24.494 [2024-10-13 17:52:14.261602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.494 [2024-10-13 17:52:14.276165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.494 [2024-10-13 17:52:14.276211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:24.494 [2024-10-13 17:52:14.276223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.542 ms 00:22:24.494 [2024-10-13 17:52:14.276232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.494 [2024-10-13 17:52:14.276706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.494 [2024-10-13 17:52:14.276726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:24.494 [2024-10-13 17:52:14.276738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:22:24.494 [2024-10-13 17:52:14.276748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.316239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.316294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.755 [2024-10-13 17:52:14.316308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.316321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.316391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.316402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.755 [2024-10-13 17:52:14.316413] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.316423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.316491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.316503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.755 [2024-10-13 17:52:14.316512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.316520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.316541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.316550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.755 [2024-10-13 17:52:14.316573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.316582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.407460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.407530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.755 [2024-10-13 17:52:14.407545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.407572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.755 [2024-10-13 17:52:14.482149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.755 [2024-10-13 17:52:14.482290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.755 [2024-10-13 17:52:14.482372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.755 [2024-10-13 17:52:14.482514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:22:24.755 [2024-10-13 17:52:14.482609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.755 [2024-10-13 17:52:14.482695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.755 [2024-10-13 17:52:14.482762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.755 [2024-10-13 17:52:14.482779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.755 [2024-10-13 17:52:14.482789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.755 [2024-10-13 17:52:14.482798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.756 [2024-10-13 17:52:14.482960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.655 ms, result 0 00:22:25.697 00:22:25.697 00:22:25.697 17:52:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:28.245 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74997 00:22:28.245 17:52:17 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74997 ']' 00:22:28.245 Process with pid 74997 is not found 00:22:28.245 17:52:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74997 00:22:28.245 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74997) - No such process 00:22:28.245 17:52:17 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74997 is not found' 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:28.245 Remove shared memory files 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:28.245 17:52:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:28.245 00:22:28.245 real 4m42.244s 00:22:28.245 user 4m29.503s 00:22:28.245 sys 0m12.602s 00:22:28.245 17:52:17 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.245 ************************************ 00:22:28.245 END TEST ftl_restore 00:22:28.245 ************************************ 00:22:28.245 
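
The md5sum -c check above is the core of the ftl_restore verification just completed: a checksum recorded before the FTL device is torn down must match the data read back after it is restored. A minimal bash sketch of that pattern, for readers skimming the log (the file path and the dd data-generation step are illustrative only; the test itself drives its I/O through the FTL bdev and cleans up testfile, testfile.md5, and ftl.json in restore_kill, as traced above):

    #!/usr/bin/env bash
    # Sketch of the verify-after-restore pattern; not the test's exact code.
    testfile=/tmp/testfile
    dd if=/dev/urandom of="$testfile" bs=1M count=256    # illustrative reference data
    md5sum "$testfile" > "$testfile.md5"                 # record the checksum up front
    # ... write $testfile to the FTL bdev, shut the device down, restore it,
    # ... then read the contents back into $testfile ...
    md5sum -c "$testfile.md5"                            # prints "testfile: OK" on success
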
17:52:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:28.245 17:52:17 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:28.245 17:52:17 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:28.245 17:52:17 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.245 17:52:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:28.245 ************************************ 00:22:28.245 START TEST ftl_dirty_shutdown 00:22:28.245 ************************************ 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:28.245 * Looking for test storage... 00:22:28.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:28.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.245 --rc genhtml_branch_coverage=1 00:22:28.245 --rc genhtml_function_coverage=1 00:22:28.245 --rc genhtml_legend=1 00:22:28.245 --rc geninfo_all_blocks=1 00:22:28.245 --rc geninfo_unexecuted_blocks=1 00:22:28.245 00:22:28.245 ' 00:22:28.245 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:28.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.245 --rc genhtml_branch_coverage=1 00:22:28.246 --rc genhtml_function_coverage=1 00:22:28.246 --rc genhtml_legend=1 00:22:28.246 --rc geninfo_all_blocks=1 00:22:28.246 --rc geninfo_unexecuted_blocks=1 00:22:28.246 00:22:28.246 ' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.246 --rc genhtml_branch_coverage=1 00:22:28.246 --rc genhtml_function_coverage=1 00:22:28.246 --rc genhtml_legend=1 00:22:28.246 --rc geninfo_all_blocks=1 00:22:28.246 --rc geninfo_unexecuted_blocks=1 00:22:28.246 00:22:28.246 ' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.246 --rc genhtml_branch_coverage=1 00:22:28.246 --rc genhtml_function_coverage=1 00:22:28.246 --rc genhtml_legend=1 00:22:28.246 --rc geninfo_all_blocks=1 00:22:28.246 --rc geninfo_unexecuted_blocks=1 00:22:28.246 00:22:28.246 ' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:28.246 17:52:17 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78043 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78043 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78043 ']' 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.246 17:52:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:28.246 [2024-10-13 17:52:17.959117] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:28.246 [2024-10-13 17:52:17.959350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78043 ] 00:22:28.507 [2024-10-13 17:52:18.109915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.507 [2024-10-13 17:52:18.223406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:29.493 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:29.755 { 00:22:29.755 "name": "nvme0n1", 00:22:29.755 "aliases": [ 00:22:29.755 "bf7a52a5-1e49-4e9c-b277-30439aa34cae" 00:22:29.755 ], 00:22:29.755 "product_name": "NVMe disk", 00:22:29.755 "block_size": 4096, 00:22:29.755 "num_blocks": 1310720, 00:22:29.755 "uuid": "bf7a52a5-1e49-4e9c-b277-30439aa34cae", 00:22:29.755 "numa_id": -1, 00:22:29.755 "assigned_rate_limits": { 00:22:29.755 "rw_ios_per_sec": 0, 00:22:29.755 "rw_mbytes_per_sec": 0, 00:22:29.755 "r_mbytes_per_sec": 0, 00:22:29.755 "w_mbytes_per_sec": 0 00:22:29.755 }, 00:22:29.755 "claimed": true, 00:22:29.755 "claim_type": "read_many_write_one", 00:22:29.755 "zoned": false, 00:22:29.755 "supported_io_types": { 00:22:29.755 "read": true, 00:22:29.755 "write": true, 00:22:29.755 "unmap": true, 00:22:29.755 "flush": true, 00:22:29.755 "reset": true, 00:22:29.755 "nvme_admin": true, 00:22:29.755 "nvme_io": true, 00:22:29.755 "nvme_io_md": false, 00:22:29.755 "write_zeroes": true, 00:22:29.755 "zcopy": false, 00:22:29.755 "get_zone_info": false, 00:22:29.755 "zone_management": false, 00:22:29.755 "zone_append": false, 00:22:29.755 "compare": true, 00:22:29.755 "compare_and_write": false, 00:22:29.755 "abort": true, 00:22:29.755 "seek_hole": false, 00:22:29.755 "seek_data": false, 00:22:29.755 
"copy": true, 00:22:29.755 "nvme_iov_md": false 00:22:29.755 }, 00:22:29.755 "driver_specific": { 00:22:29.755 "nvme": [ 00:22:29.755 { 00:22:29.755 "pci_address": "0000:00:11.0", 00:22:29.755 "trid": { 00:22:29.755 "trtype": "PCIe", 00:22:29.755 "traddr": "0000:00:11.0" 00:22:29.755 }, 00:22:29.755 "ctrlr_data": { 00:22:29.755 "cntlid": 0, 00:22:29.755 "vendor_id": "0x1b36", 00:22:29.755 "model_number": "QEMU NVMe Ctrl", 00:22:29.755 "serial_number": "12341", 00:22:29.755 "firmware_revision": "8.0.0", 00:22:29.755 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:29.755 "oacs": { 00:22:29.755 "security": 0, 00:22:29.755 "format": 1, 00:22:29.755 "firmware": 0, 00:22:29.755 "ns_manage": 1 00:22:29.755 }, 00:22:29.755 "multi_ctrlr": false, 00:22:29.755 "ana_reporting": false 00:22:29.755 }, 00:22:29.755 "vs": { 00:22:29.755 "nvme_version": "1.4" 00:22:29.755 }, 00:22:29.755 "ns_data": { 00:22:29.755 "id": 1, 00:22:29.755 "can_share": false 00:22:29.755 } 00:22:29.755 } 00:22:29.755 ], 00:22:29.755 "mp_policy": "active_passive" 00:22:29.755 } 00:22:29.755 } 00:22:29.755 ]' 00:22:29.755 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:30.029 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:30.029 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:30.030 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:30.291 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=bcd51b23-6873-4ccf-afa8-01e6f88fbf9e 00:22:30.291 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:30.291 17:52:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcd51b23-6873-4ccf-afa8-01e6f88fbf9e 00:22:30.291 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:30.552 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=1409c982-e896-49e1-9aed-4e02ac888fc1 00:22:30.552 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1409c982-e896-49e1-9aed-4e02ac888fc1 00:22:30.813 17:52:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:30.813 17:52:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:30.814 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:31.075 { 00:22:31.075 "name": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:31.075 "aliases": [ 00:22:31.075 "lvs/nvme0n1p0" 00:22:31.075 ], 00:22:31.075 "product_name": "Logical Volume", 00:22:31.075 "block_size": 4096, 00:22:31.075 "num_blocks": 26476544, 00:22:31.075 "uuid": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:31.075 "assigned_rate_limits": { 00:22:31.075 "rw_ios_per_sec": 0, 00:22:31.075 "rw_mbytes_per_sec": 0, 00:22:31.075 "r_mbytes_per_sec": 0, 00:22:31.075 "w_mbytes_per_sec": 0 00:22:31.075 }, 00:22:31.075 "claimed": false, 00:22:31.075 "zoned": false, 00:22:31.075 "supported_io_types": { 00:22:31.075 "read": true, 00:22:31.075 "write": true, 00:22:31.075 "unmap": true, 00:22:31.075 "flush": false, 00:22:31.075 "reset": true, 00:22:31.075 "nvme_admin": false, 00:22:31.075 "nvme_io": false, 00:22:31.075 "nvme_io_md": false, 00:22:31.075 "write_zeroes": true, 00:22:31.075 "zcopy": false, 00:22:31.075 "get_zone_info": false, 00:22:31.075 "zone_management": false, 00:22:31.075 "zone_append": false, 00:22:31.075 "compare": false, 00:22:31.075 "compare_and_write": false, 00:22:31.075 "abort": false, 00:22:31.075 "seek_hole": true, 00:22:31.075 "seek_data": true, 00:22:31.075 "copy": false, 00:22:31.075 "nvme_iov_md": false 00:22:31.075 }, 00:22:31.075 "driver_specific": { 00:22:31.075 "lvol": { 00:22:31.075 "lvol_store_uuid": "1409c982-e896-49e1-9aed-4e02ac888fc1", 00:22:31.075 "base_bdev": "nvme0n1", 00:22:31.075 "thin_provision": true, 00:22:31.075 "num_allocated_clusters": 0, 00:22:31.075 "snapshot": false, 00:22:31.075 "clone": false, 00:22:31.075 "esnap_clone": false 00:22:31.075 } 00:22:31.075 } 00:22:31.075 } 00:22:31.075 ]' 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:31.075 17:52:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:31.336 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:31.597 { 00:22:31.597 "name": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:31.597 "aliases": [ 00:22:31.597 "lvs/nvme0n1p0" 00:22:31.597 ], 00:22:31.597 "product_name": "Logical Volume", 00:22:31.597 "block_size": 4096, 00:22:31.597 "num_blocks": 26476544, 00:22:31.597 "uuid": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:31.597 "assigned_rate_limits": { 00:22:31.597 "rw_ios_per_sec": 0, 00:22:31.597 "rw_mbytes_per_sec": 0, 00:22:31.597 "r_mbytes_per_sec": 0, 00:22:31.597 "w_mbytes_per_sec": 0 00:22:31.597 }, 00:22:31.597 "claimed": false, 00:22:31.597 "zoned": false, 00:22:31.597 "supported_io_types": { 00:22:31.597 "read": true, 00:22:31.597 "write": true, 00:22:31.597 "unmap": true, 00:22:31.597 "flush": false, 00:22:31.597 "reset": true, 00:22:31.597 "nvme_admin": false, 00:22:31.597 "nvme_io": false, 00:22:31.597 "nvme_io_md": false, 00:22:31.597 "write_zeroes": true, 00:22:31.597 "zcopy": false, 00:22:31.597 "get_zone_info": false, 00:22:31.597 "zone_management": false, 00:22:31.597 "zone_append": false, 00:22:31.597 "compare": false, 00:22:31.597 "compare_and_write": false, 00:22:31.597 "abort": false, 00:22:31.597 "seek_hole": true, 00:22:31.597 "seek_data": true, 00:22:31.597 "copy": false, 00:22:31.597 "nvme_iov_md": false 00:22:31.597 }, 00:22:31.597 "driver_specific": { 00:22:31.597 "lvol": { 00:22:31.597 "lvol_store_uuid": "1409c982-e896-49e1-9aed-4e02ac888fc1", 00:22:31.597 "base_bdev": "nvme0n1", 00:22:31.597 "thin_provision": true, 00:22:31.597 "num_allocated_clusters": 0, 00:22:31.597 "snapshot": false, 00:22:31.597 "clone": false, 00:22:31.597 "esnap_clone": false 00:22:31.597 } 00:22:31.597 } 00:22:31.597 } 00:22:31.597 ]' 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:31.597 17:52:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:31.860 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd2e2afd-0dd0-4eb0-8454-12505b162199 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:32.121 { 00:22:32.121 "name": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:32.121 "aliases": [ 00:22:32.121 "lvs/nvme0n1p0" 00:22:32.121 ], 00:22:32.121 "product_name": "Logical Volume", 00:22:32.121 "block_size": 4096, 00:22:32.121 "num_blocks": 26476544, 00:22:32.121 "uuid": "cd2e2afd-0dd0-4eb0-8454-12505b162199", 00:22:32.121 "assigned_rate_limits": { 00:22:32.121 "rw_ios_per_sec": 0, 00:22:32.121 "rw_mbytes_per_sec": 0, 00:22:32.121 "r_mbytes_per_sec": 0, 00:22:32.121 "w_mbytes_per_sec": 0 00:22:32.121 }, 00:22:32.121 "claimed": false, 00:22:32.121 "zoned": false, 00:22:32.121 "supported_io_types": { 00:22:32.121 "read": true, 00:22:32.121 "write": true, 00:22:32.121 "unmap": true, 00:22:32.121 "flush": false, 00:22:32.121 "reset": true, 00:22:32.121 "nvme_admin": false, 00:22:32.121 "nvme_io": false, 00:22:32.121 "nvme_io_md": false, 00:22:32.121 "write_zeroes": true, 00:22:32.121 "zcopy": false, 00:22:32.121 "get_zone_info": false, 00:22:32.121 "zone_management": false, 00:22:32.121 "zone_append": false, 00:22:32.121 "compare": false, 00:22:32.121 "compare_and_write": false, 00:22:32.121 "abort": false, 00:22:32.121 "seek_hole": true, 00:22:32.121 "seek_data": true, 00:22:32.121 "copy": false, 00:22:32.121 "nvme_iov_md": false 00:22:32.121 }, 00:22:32.121 "driver_specific": { 00:22:32.121 "lvol": { 00:22:32.121 "lvol_store_uuid": "1409c982-e896-49e1-9aed-4e02ac888fc1", 00:22:32.121 "base_bdev": "nvme0n1", 00:22:32.121 "thin_provision": true, 00:22:32.121 "num_allocated_clusters": 0, 00:22:32.121 "snapshot": false, 00:22:32.121 "clone": false, 00:22:32.121 "esnap_clone": false 00:22:32.121 } 00:22:32.121 } 00:22:32.121 } 00:22:32.121 ]' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cd2e2afd-0dd0-4eb0-8454-12505b162199 
--l2p_dram_limit 10' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:32.121 17:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cd2e2afd-0dd0-4eb0-8454-12505b162199 --l2p_dram_limit 10 -c nvc0n1p0 00:22:32.381 [2024-10-13 17:52:22.014303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.014350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:32.381 [2024-10-13 17:52:22.014365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.381 [2024-10-13 17:52:22.014373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.014428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.014438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.381 [2024-10-13 17:52:22.014447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:32.381 [2024-10-13 17:52:22.014453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.014474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:32.381 [2024-10-13 17:52:22.015130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:32.381 [2024-10-13 17:52:22.015152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.015159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.381 [2024-10-13 17:52:22.015168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:22:32.381 [2024-10-13 17:52:22.015174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.015234] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4cf00a7e-b346-4e16-a62c-1327e6d2445c 00:22:32.381 [2024-10-13 17:52:22.016552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.016596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:32.381 [2024-10-13 17:52:22.016606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:32.381 [2024-10-13 17:52:22.016618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.023455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.023486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.381 [2024-10-13 17:52:22.023494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:22:32.381 [2024-10-13 17:52:22.023502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.023587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.023596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.381 [2024-10-13 17:52:22.023603] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:32.381 [2024-10-13 17:52:22.023614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.023654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.023663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:32.381 [2024-10-13 17:52:22.023670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:32.381 [2024-10-13 17:52:22.023678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.023697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:32.381 [2024-10-13 17:52:22.026974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.026999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.381 [2024-10-13 17:52:22.027009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:22:32.381 [2024-10-13 17:52:22.027019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.027047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.027055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:32.381 [2024-10-13 17:52:22.027063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:32.381 [2024-10-13 17:52:22.027069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.027083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:32.381 [2024-10-13 17:52:22.027196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:32.381 [2024-10-13 17:52:22.027213] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:32.381 [2024-10-13 17:52:22.027223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:32.381 [2024-10-13 17:52:22.027233] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:32.381 [2024-10-13 17:52:22.027241] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:32.381 [2024-10-13 17:52:22.027249] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:32.381 [2024-10-13 17:52:22.027256] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:32.381 [2024-10-13 17:52:22.027263] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:32.381 [2024-10-13 17:52:22.027269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:32.381 [2024-10-13 17:52:22.027277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.027284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:32.381 [2024-10-13 17:52:22.027292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:22:32.381 [2024-10-13 17:52:22.027303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.027370] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.381 [2024-10-13 17:52:22.027377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:32.381 [2024-10-13 17:52:22.027384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:32.381 [2024-10-13 17:52:22.027390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.381 [2024-10-13 17:52:22.027466] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:32.381 [2024-10-13 17:52:22.027482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:32.382 [2024-10-13 17:52:22.027492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:32.382 [2024-10-13 17:52:22.027512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:32.382 [2024-10-13 17:52:22.027530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.382 [2024-10-13 17:52:22.027542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:32.382 [2024-10-13 17:52:22.027547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:32.382 [2024-10-13 17:52:22.027571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.382 [2024-10-13 17:52:22.027578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:32.382 [2024-10-13 17:52:22.027587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:32.382 [2024-10-13 17:52:22.027593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:32.382 [2024-10-13 17:52:22.027607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:32.382 [2024-10-13 17:52:22.027627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:32.382 [2024-10-13 17:52:22.027644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:32.382 [2024-10-13 17:52:22.027663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027674] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:32.382 [2024-10-13 17:52:22.027680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:32.382 [2024-10-13 17:52:22.027699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.382 [2024-10-13 17:52:22.027711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:32.382 [2024-10-13 17:52:22.027717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:32.382 [2024-10-13 17:52:22.027723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.382 [2024-10-13 17:52:22.027728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:32.382 [2024-10-13 17:52:22.027735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:32.382 [2024-10-13 17:52:22.027739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:32.382 [2024-10-13 17:52:22.027751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:32.382 [2024-10-13 17:52:22.027757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:32.382 [2024-10-13 17:52:22.027770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:32.382 [2024-10-13 17:52:22.027775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.382 [2024-10-13 17:52:22.027789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:32.382 [2024-10-13 17:52:22.027798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:32.382 [2024-10-13 17:52:22.027803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:32.382 [2024-10-13 17:52:22.027810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:32.382 [2024-10-13 17:52:22.027815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:32.382 [2024-10-13 17:52:22.027822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:32.382 [2024-10-13 17:52:22.027830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:32.382 [2024-10-13 17:52:22.027839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:32.382 [2024-10-13 17:52:22.027860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:32.382 [2024-10-13 17:52:22.027866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:32.382 [2024-10-13 17:52:22.027872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:32.382 [2024-10-13 17:52:22.027878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:32.382 [2024-10-13 17:52:22.027885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:32.382 [2024-10-13 17:52:22.027891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:32.382 [2024-10-13 17:52:22.027898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:32.382 [2024-10-13 17:52:22.027903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:32.382 [2024-10-13 17:52:22.027912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:32.382 [2024-10-13 17:52:22.027943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:32.382 [2024-10-13 17:52:22.027950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.382 [2024-10-13 17:52:22.027966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:32.382 [2024-10-13 17:52:22.027971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:32.382 [2024-10-13 17:52:22.027978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:32.382 [2024-10-13 17:52:22.027984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.382 [2024-10-13 17:52:22.027993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:32.382 [2024-10-13 17:52:22.028000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:22:32.382 [2024-10-13 17:52:22.028010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.382 [2024-10-13 17:52:22.028053] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:32.382 [2024-10-13 17:52:22.028066] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:36.587 [2024-10-13 17:52:26.004141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.004229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:36.587 [2024-10-13 17:52:26.004250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3976.072 ms 00:22:36.587 [2024-10-13 17:52:26.004263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.042615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.042693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.587 [2024-10-13 17:52:26.042711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.567 ms 00:22:36.587 [2024-10-13 17:52:26.042723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.042890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.042907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:36.587 [2024-10-13 17:52:26.042918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:36.587 [2024-10-13 17:52:26.042933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.082621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.082685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:36.587 [2024-10-13 17:52:26.082699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.650 ms 00:22:36.587 [2024-10-13 17:52:26.082710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.082748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.082760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:36.587 [2024-10-13 17:52:26.082769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:36.587 [2024-10-13 17:52:26.082784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.083513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.083554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:36.587 [2024-10-13 17:52:26.083582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:22:36.587 [2024-10-13 17:52:26.083593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.083718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.083732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:36.587 [2024-10-13 17:52:26.083742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:36.587 [2024-10-13 17:52:26.083756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.104068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.104119] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:36.587 [2024-10-13 17:52:26.104130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.288 ms 00:22:36.587 [2024-10-13 17:52:26.104144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.119094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:36.587 [2024-10-13 17:52:26.124099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.124143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:36.587 [2024-10-13 17:52:26.124158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.861 ms 00:22:36.587 [2024-10-13 17:52:26.124166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.227830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.227911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:36.587 [2024-10-13 17:52:26.227932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.626 ms 00:22:36.587 [2024-10-13 17:52:26.227942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.228168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.228182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:36.587 [2024-10-13 17:52:26.228199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:22:36.587 [2024-10-13 17:52:26.228210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.587 [2024-10-13 17:52:26.254266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.587 [2024-10-13 17:52:26.254319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:36.587 [2024-10-13 17:52:26.254335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.998 ms 00:22:36.587 [2024-10-13 17:52:26.254344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.588 [2024-10-13 17:52:26.279777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.588 [2024-10-13 17:52:26.279822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:36.588 [2024-10-13 17:52:26.279838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.379 ms 00:22:36.588 [2024-10-13 17:52:26.279855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.588 [2024-10-13 17:52:26.280486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.588 [2024-10-13 17:52:26.280520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:36.588 [2024-10-13 17:52:26.280533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:22:36.588 [2024-10-13 17:52:26.280541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.588 [2024-10-13 17:52:26.372588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.588 [2024-10-13 17:52:26.372640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:36.588 [2024-10-13 17:52:26.372660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.987 ms 00:22:36.588 [2024-10-13 17:52:26.372669] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.401370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.849 [2024-10-13 17:52:26.401421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:36.849 [2024-10-13 17:52:26.401440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.618 ms 00:22:36.849 [2024-10-13 17:52:26.401449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.426939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.849 [2024-10-13 17:52:26.427002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:36.849 [2024-10-13 17:52:26.427017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.436 ms 00:22:36.849 [2024-10-13 17:52:26.427027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.453295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.849 [2024-10-13 17:52:26.453338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:36.849 [2024-10-13 17:52:26.453354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.217 ms 00:22:36.849 [2024-10-13 17:52:26.453361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.453418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.849 [2024-10-13 17:52:26.453428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:36.849 [2024-10-13 17:52:26.453444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:36.849 [2024-10-13 17:52:26.453452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.453570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.849 [2024-10-13 17:52:26.453583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:36.849 [2024-10-13 17:52:26.453595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:36.849 [2024-10-13 17:52:26.453604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.849 [2024-10-13 17:52:26.454970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4440.045 ms, result 0 00:22:36.849 { 00:22:36.849 "name": "ftl0", 00:22:36.849 "uuid": "4cf00a7e-b346-4e16-a62c-1327e6d2445c" 00:22:36.849 } 00:22:36.849 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:36.849 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:37.110 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:37.110 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:37.110 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:37.371 /dev/nbd0 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:37.371 1+0 records in 00:22:37.371 1+0 records out 00:22:37.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412386 s, 9.9 MB/s 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:37.371 17:52:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:37.371 [2024-10-13 17:52:27.038649] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:22:37.371 [2024-10-13 17:52:27.039340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78190 ] 00:22:37.632 [2024-10-13 17:52:27.196497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.632 [2024-10-13 17:52:27.345616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.017  [2024-10-13T17:52:29.775Z] Copying: 193/1024 [MB] (193 MBps) [2024-10-13T17:52:30.718Z] Copying: 418/1024 [MB] (224 MBps) [2024-10-13T17:52:31.661Z] Copying: 674/1024 [MB] (256 MBps) [2024-10-13T17:52:32.233Z] Copying: 925/1024 [MB] (250 MBps) [2024-10-13T17:52:32.854Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:22:43.040 00:22:43.040 17:52:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:45.581 17:52:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:45.581 [2024-10-13 17:52:34.897611] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:45.581 [2024-10-13 17:52:34.897737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78277 ] 00:22:45.581 [2024-10-13 17:52:35.046871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.581 [2024-10-13 17:52:35.140519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.523  [2024-10-13T17:52:37.723Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-13T17:52:38.666Z] Copying: 30/1024 [MB] (15 MBps) [2024-10-13T17:52:39.609Z] Copying: 44/1024 [MB] (14 MBps) [2024-10-13T17:52:40.552Z] Copying: 57/1024 [MB] (12 MBps) [2024-10-13T17:52:41.494Z] Copying: 70/1024 [MB] (12 MBps) [2024-10-13T17:52:42.437Z] Copying: 83/1024 [MB] (12 MBps) [2024-10-13T17:52:43.380Z] Copying: 103/1024 [MB] (20 MBps) [2024-10-13T17:52:44.767Z] Copying: 116/1024 [MB] (13 MBps) [2024-10-13T17:52:45.337Z] Copying: 130/1024 [MB] (13 MBps) [2024-10-13T17:52:46.722Z] Copying: 146/1024 [MB] (16 MBps) [2024-10-13T17:52:47.664Z] Copying: 159/1024 [MB] (13 MBps) [2024-10-13T17:52:48.607Z] Copying: 176/1024 [MB] (16 MBps) [2024-10-13T17:52:49.551Z] Copying: 187/1024 [MB] (11 MBps) [2024-10-13T17:52:50.493Z] Copying: 199/1024 [MB] (11 MBps) [2024-10-13T17:52:51.435Z] Copying: 212/1024 [MB] (13 MBps) [2024-10-13T17:52:52.379Z] Copying: 223/1024 [MB] (10 MBps) [2024-10-13T17:52:53.767Z] Copying: 236/1024 [MB] (12 MBps) [2024-10-13T17:52:54.382Z] Copying: 248/1024 [MB] (12 MBps) [2024-10-13T17:52:55.770Z] Copying: 260/1024 [MB] (11 MBps) [2024-10-13T17:52:56.343Z] Copying: 271/1024 [MB] (11 MBps) [2024-10-13T17:52:57.730Z] Copying: 282/1024 [MB] (10 MBps) [2024-10-13T17:52:58.674Z] Copying: 296/1024 [MB] (14 MBps) [2024-10-13T17:52:59.617Z] Copying: 309/1024 [MB] (13 MBps) [2024-10-13T17:53:00.561Z] Copying: 323/1024 [MB] (13 MBps) [2024-10-13T17:53:01.506Z] Copying: 338/1024 [MB] (14 MBps) [2024-10-13T17:53:02.449Z] Copying: 351/1024 [MB] (13 MBps) [2024-10-13T17:53:03.393Z] Copying: 368/1024 [MB] (17 MBps) [2024-10-13T17:53:04.338Z] Copying: 381/1024 [MB] (12 MBps) [2024-10-13T17:53:05.727Z] Copying: 393/1024 [MB] (11 MBps) [2024-10-13T17:53:06.672Z] Copying: 406/1024 [MB] (13 MBps) [2024-10-13T17:53:07.615Z] Copying: 417/1024 [MB] (10 MBps) [2024-10-13T17:53:08.556Z] Copying: 432/1024 [MB] (15 MBps) [2024-10-13T17:53:09.500Z] Copying: 447/1024 [MB] (14 MBps) [2024-10-13T17:53:10.444Z] Copying: 459/1024 [MB] (12 MBps) [2024-10-13T17:53:11.388Z] Copying: 473/1024 [MB] (14 MBps) [2024-10-13T17:53:12.331Z] Copying: 486/1024 [MB] (12 MBps) [2024-10-13T17:53:13.717Z] Copying: 505/1024 [MB] (18 MBps) [2024-10-13T17:53:14.699Z] Copying: 524/1024 [MB] (19 MBps) [2024-10-13T17:53:15.641Z] Copying: 543/1024 [MB] (19 MBps) [2024-10-13T17:53:16.585Z] Copying: 560/1024 [MB] (17 MBps) [2024-10-13T17:53:17.528Z] Copying: 575/1024 [MB] (14 MBps) [2024-10-13T17:53:18.477Z] Copying: 596/1024 [MB] (20 MBps) [2024-10-13T17:53:19.420Z] Copying: 610/1024 [MB] (14 MBps) [2024-10-13T17:53:20.363Z] Copying: 625/1024 [MB] (14 MBps) [2024-10-13T17:53:21.743Z] Copying: 638/1024 [MB] (13 MBps) [2024-10-13T17:53:22.689Z] Copying: 665/1024 [MB] (26 MBps) [2024-10-13T17:53:23.632Z] Copying: 681/1024 [MB] (16 MBps) [2024-10-13T17:53:24.575Z] Copying: 696/1024 [MB] (14 MBps) [2024-10-13T17:53:25.510Z] Copying: 714/1024 [MB] (17 MBps) [2024-10-13T17:53:26.447Z] Copying: 741/1024 [MB] (27 MBps) 
[2024-10-13T17:53:27.391Z] Copying: 768/1024 [MB] (26 MBps) [2024-10-13T17:53:28.335Z] Copying: 784/1024 [MB] (15 MBps) [2024-10-13T17:53:29.715Z] Copying: 798/1024 [MB] (14 MBps) [2024-10-13T17:53:30.657Z] Copying: 817/1024 [MB] (18 MBps) [2024-10-13T17:53:31.601Z] Copying: 843/1024 [MB] (25 MBps) [2024-10-13T17:53:32.542Z] Copying: 862/1024 [MB] (19 MBps) [2024-10-13T17:53:33.491Z] Copying: 886/1024 [MB] (23 MBps) [2024-10-13T17:53:34.477Z] Copying: 903/1024 [MB] (17 MBps) [2024-10-13T17:53:35.422Z] Copying: 921/1024 [MB] (17 MBps) [2024-10-13T17:53:36.365Z] Copying: 941/1024 [MB] (20 MBps) [2024-10-13T17:53:37.745Z] Copying: 958/1024 [MB] (16 MBps) [2024-10-13T17:53:38.689Z] Copying: 984/1024 [MB] (25 MBps) [2024-10-13T17:53:39.633Z] Copying: 1001/1024 [MB] (17 MBps) [2024-10-13T17:53:39.893Z] Copying: 1014/1024 [MB] (13 MBps) [2024-10-13T17:53:40.462Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:23:50.648 00:23:50.908 17:53:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:50.908 17:53:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:50.908 17:53:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:51.169 [2024-10-13 17:53:40.881578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.881636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.169 [2024-10-13 17:53:40.881652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:51.169 [2024-10-13 17:53:40.881663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.881688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.169 [2024-10-13 17:53:40.884658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.884696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.169 [2024-10-13 17:53:40.884709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:23:51.169 [2024-10-13 17:53:40.884717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.887391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.887429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.169 [2024-10-13 17:53:40.887442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:23:51.169 [2024-10-13 17:53:40.887451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.904479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.904519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.169 [2024-10-13 17:53:40.904532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:23:51.169 [2024-10-13 17:53:40.904543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.910705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.910737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:51.169 [2024-10-13 17:53:40.910749] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:23:51.169 [2024-10-13 17:53:40.910757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.936138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.936177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.169 [2024-10-13 17:53:40.936190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.268 ms 00:23:51.169 [2024-10-13 17:53:40.936199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.952320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.952361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.169 [2024-10-13 17:53:40.952376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.077 ms 00:23:51.169 [2024-10-13 17:53:40.952384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.952538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.952552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.169 [2024-10-13 17:53:40.952574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:51.169 [2024-10-13 17:53:40.952582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.169 [2024-10-13 17:53:40.976252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.169 [2024-10-13 17:53:40.976289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:51.169 [2024-10-13 17:53:40.976301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.650 ms 00:23:51.169 [2024-10-13 17:53:40.976309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.430 [2024-10-13 17:53:40.999489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.430 [2024-10-13 17:53:40.999525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:51.430 [2024-10-13 17:53:40.999537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms 00:23:51.430 [2024-10-13 17:53:40.999548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.430 [2024-10-13 17:53:41.023016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.430 [2024-10-13 17:53:41.023054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.430 [2024-10-13 17:53:41.023066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.413 ms 00:23:51.430 [2024-10-13 17:53:41.023073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.430 [2024-10-13 17:53:41.046540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.430 [2024-10-13 17:53:41.046584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.430 [2024-10-13 17:53:41.046598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.385 ms 00:23:51.430 [2024-10-13 17:53:41.046606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.430 [2024-10-13 17:53:41.046647] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.430 [2024-10-13 17:53:41.046662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 
/ 261120 wr_cnt: 0 state: free 00:23:51.430 [2024-10-13 17:53:41.046675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (all identical)
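The dump above is the clean-shutdown signature: immediately after 'Set FTL clean state', every band reports 0 / 261120 valid blocks, wr_cnt 0, state free. For comparison, the same information can be pulled from a live target over JSON-RPC before unloading; a minimal sketch against this run's target, where bdev_get_bdevs is the generic query and bdev_ftl_get_stats is assumed to be available in this SPDK build:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Generic bdev query: name, UUID, block counts, and FTL driver-specific info.
"$RPC" bdev_get_bdevs -b ftl0
# FTL I/O counters; user vs. internal writes are what the WAF figure in the
# stats dump below is derived from. Assumes the bdev_ftl_get_stats RPC exists.
"$RPC" bdev_ftl_get_stats -b ftl0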
00:23:51.431 [2024-10-13 17:53:41.047624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.431 [2024-10-13 17:53:41.047634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf00a7e-b346-4e16-a62c-1327e6d2445c 00:23:51.431 [2024-10-13 17:53:41.047643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:51.432 [2024-10-13 17:53:41.047655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:51.432 [2024-10-13 17:53:41.047662] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:51.432 [2024-10-13 17:53:41.047672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:51.432 [2024-10-13 17:53:41.047680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.432 [2024-10-13 17:53:41.047690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.432 [2024-10-13 17:53:41.047700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.432 [2024-10-13 17:53:41.047708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.432 [2024-10-13 17:53:41.047715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.432 [2024-10-13 17:53:41.047724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.432 [2024-10-13 17:53:41.047732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.432 [2024-10-13 17:53:41.047743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:23:51.432 [2024-10-13 17:53:41.047751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.061530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.432 [2024-10-13 17:53:41.061577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.432 [2024-10-13 17:53:41.061591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.721 ms 00:23:51.432 [2024-10-13 17:53:41.061599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.062000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.432 [2024-10-13 17:53:41.062016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.432 [2024-10-13 17:53:41.062027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:23:51.432 [2024-10-13 17:53:41.062034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.108931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.432 [2024-10-13 17:53:41.108977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.432 [2024-10-13 17:53:41.108992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.432 [2024-10-13 17:53:41.109003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.109083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.432 [2024-10-13 17:53:41.109092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.432 [2024-10-13 17:53:41.109102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.432 [2024-10-13 17:53:41.109111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 
17:53:41.109197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.432 [2024-10-13 17:53:41.109209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.432 [2024-10-13 17:53:41.109220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.432 [2024-10-13 17:53:41.109228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.109255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.432 [2024-10-13 17:53:41.109263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.432 [2024-10-13 17:53:41.109274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.432 [2024-10-13 17:53:41.109282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.432 [2024-10-13 17:53:41.201065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.432 [2024-10-13 17:53:41.201134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.432 [2024-10-13 17:53:41.201152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.432 [2024-10-13 17:53:41.201166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.276941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.693 [2024-10-13 17:53:41.277027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.693 [2024-10-13 17:53:41.277205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.693 [2024-10-13 17:53:41.277302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.693 [2024-10-13 17:53:41.277457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.693 [2024-10-13 17:53:41.277534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277543] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.693 [2024-10-13 17:53:41.277657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.693 [2024-10-13 17:53:41.277756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.693 [2024-10-13 17:53:41.277768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.693 [2024-10-13 17:53:41.277776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.693 [2024-10-13 17:53:41.277965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.332 ms, result 0 00:23:51.693 true 00:23:51.693 17:53:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78043 00:23:51.693 17:53:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78043 00:23:51.693 17:53:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:51.693 [2024-10-13 17:53:41.394353] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:51.693 [2024-10-13 17:53:41.394522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ] 00:23:51.954 [2024-10-13 17:53:41.547423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.954 [2024-10-13 17:53:41.680313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.340  [2024-10-13T17:53:44.097Z] Copying: 223/1024 [MB] (223 MBps) [2024-10-13T17:53:45.040Z] Copying: 480/1024 [MB] (256 MBps) [2024-10-13T17:53:45.981Z] Copying: 734/1024 [MB] (253 MBps) [2024-10-13T17:53:46.240Z] Copying: 987/1024 [MB] (252 MBps) [2024-10-13T17:53:46.811Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:23:56.997 00:23:56.997 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78043 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:56.997 17:53:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:57.259 [2024-10-13 17:53:46.817733] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
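This is the dirty-shutdown step the test is named for: the first target (pid 78043) is killed with SIGKILL, its pid file is removed, a fresh 1 GiB random payload is generated, and a second spdk_dd run drives that payload into ftl0 by replaying the saved JSON config. A condensed sketch of the sequence, using the paths exactly as they appear in this run:
SPDK=/home/vagrant/spdk_repo/spdk
# Take the target down hard; nothing it still held open gets flushed.
kill -9 78043
rm -f /dev/shm/spdk_tgt_trace.pid78043
# Regenerate the test payload (4096 B x 262144 blocks = 1 GiB of random data).
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom --of="$SPDK/test/ftl/testfile2" \
  --bs=4096 --count=262144
# Write it to the FTL bdev at an offset; --json replays the saved bdev config,
# so this spdk_dd instance brings ftl0 up itself and takes the recovery path
# ('Performing recovery on blobstore', then the 'FTL startup' trace below).
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
  --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"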
00:23:57.259 [2024-10-13 17:53:46.817854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79025 ] 00:23:57.259 [2024-10-13 17:53:46.970271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.520 [2024-10-13 17:53:47.072515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.520 [2024-10-13 17:53:47.304130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:57.520 [2024-10-13 17:53:47.304184] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:57.780 [2024-10-13 17:53:47.367383] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:57.780 [2024-10-13 17:53:47.367640] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:57.780 [2024-10-13 17:53:47.367853] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:58.043 [2024-10-13 17:53:47.690682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.690722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.043 [2024-10-13 17:53:47.690734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.043 [2024-10-13 17:53:47.690741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.690782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.690790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.043 [2024-10-13 17:53:47.690796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:58.043 [2024-10-13 17:53:47.690802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.690815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.043 [2024-10-13 17:53:47.691362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.043 [2024-10-13 17:53:47.691374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.691380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.043 [2024-10-13 17:53:47.691388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:23:58.043 [2024-10-13 17:53:47.691394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.692724] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:58.043 [2024-10-13 17:53:47.703021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.703047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:58.043 [2024-10-13 17:53:47.703060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.298 ms 00:23:58.043 [2024-10-13 17:53:47.703066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.703112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.703120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:58.043 [2024-10-13 17:53:47.703127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:58.043 [2024-10-13 17:53:47.703133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.709453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.709481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.043 [2024-10-13 17:53:47.709489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.279 ms 00:23:58.043 [2024-10-13 17:53:47.709495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.709553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.709575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.043 [2024-10-13 17:53:47.709582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:58.043 [2024-10-13 17:53:47.709588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.709627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.709635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:58.043 [2024-10-13 17:53:47.709643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:58.043 [2024-10-13 17:53:47.709649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.709665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:58.043 [2024-10-13 17:53:47.712665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.712689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.043 [2024-10-13 17:53:47.712697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:23:58.043 [2024-10-13 17:53:47.712703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.712729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.712735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:58.043 [2024-10-13 17:53:47.712742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:58.043 [2024-10-13 17:53:47.712748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.712765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:58.043 [2024-10-13 17:53:47.712784] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:58.043 [2024-10-13 17:53:47.712816] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:58.043 [2024-10-13 17:53:47.712829] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:58.043 [2024-10-13 17:53:47.712912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:58.043 [2024-10-13 17:53:47.712921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:58.043 
[2024-10-13 17:53:47.712930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:58.043 [2024-10-13 17:53:47.712937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:58.043 [2024-10-13 17:53:47.712944] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:58.043 [2024-10-13 17:53:47.712953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:58.043 [2024-10-13 17:53:47.712959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:58.043 [2024-10-13 17:53:47.712964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:58.043 [2024-10-13 17:53:47.712970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:58.043 [2024-10-13 17:53:47.712975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.712981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:58.043 [2024-10-13 17:53:47.712987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:23:58.043 [2024-10-13 17:53:47.712993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.713057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.043 [2024-10-13 17:53:47.713063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:58.043 [2024-10-13 17:53:47.713069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:58.043 [2024-10-13 17:53:47.713077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.043 [2024-10-13 17:53:47.713153] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:58.043 [2024-10-13 17:53:47.713161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:58.043 [2024-10-13 17:53:47.713168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:58.043 [2024-10-13 17:53:47.713185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:58.043 [2024-10-13 17:53:47.713202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.043 [2024-10-13 17:53:47.713213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:58.043 [2024-10-13 17:53:47.713222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:58.043 [2024-10-13 17:53:47.713226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.043 [2024-10-13 17:53:47.713231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:58.043 [2024-10-13 17:53:47.713236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:58.043 [2024-10-13 17:53:47.713243] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:58.043 [2024-10-13 17:53:47.713253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:58.043 [2024-10-13 17:53:47.713269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:58.043 [2024-10-13 17:53:47.713283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:58.043 [2024-10-13 17:53:47.713298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:58.043 [2024-10-13 17:53:47.713313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:58.043 [2024-10-13 17:53:47.713318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.043 [2024-10-13 17:53:47.713322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:58.043 [2024-10-13 17:53:47.713327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:58.044 [2024-10-13 17:53:47.713332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.044 [2024-10-13 17:53:47.713337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:58.044 [2024-10-13 17:53:47.713341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:58.044 [2024-10-13 17:53:47.713346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.044 [2024-10-13 17:53:47.713351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:58.044 [2024-10-13 17:53:47.713356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:58.044 [2024-10-13 17:53:47.713361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.044 [2024-10-13 17:53:47.713366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:58.044 [2024-10-13 17:53:47.713370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:58.044 [2024-10-13 17:53:47.713376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.044 [2024-10-13 17:53:47.713381] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:58.044 [2024-10-13 17:53:47.713387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:58.044 [2024-10-13 17:53:47.713392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.044 [2024-10-13 17:53:47.713398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.044 [2024-10-13 
17:53:47.713407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:58.044 [2024-10-13 17:53:47.713412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:58.044 [2024-10-13 17:53:47.713417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:58.044 [2024-10-13 17:53:47.713423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:58.044 [2024-10-13 17:53:47.713428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:58.044 [2024-10-13 17:53:47.713433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:58.044 [2024-10-13 17:53:47.713440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:58.044 [2024-10-13 17:53:47.713446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:58.044 [2024-10-13 17:53:47.713458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:58.044 [2024-10-13 17:53:47.713464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:58.044 [2024-10-13 17:53:47.713469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:58.044 [2024-10-13 17:53:47.713474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:58.044 [2024-10-13 17:53:47.713480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:58.044 [2024-10-13 17:53:47.713485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:58.044 [2024-10-13 17:53:47.713491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:58.044 [2024-10-13 17:53:47.713496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:58.044 [2024-10-13 17:53:47.713501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:58.044 [2024-10-13 17:53:47.713528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:58.044 [2024-10-13 17:53:47.713534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.044 [2024-10-13 17:53:47.713547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:58.044 [2024-10-13 17:53:47.713552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:58.044 [2024-10-13 17:53:47.713574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:58.044 [2024-10-13 17:53:47.713580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.713585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:58.044 [2024-10-13 17:53:47.713594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:23:58.044 [2024-10-13 17:53:47.713600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.738637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.738768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.044 [2024-10-13 17:53:47.738814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.988 ms 00:23:58.044 [2024-10-13 17:53:47.738832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.738912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.738929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:58.044 [2024-10-13 17:53:47.738949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:58.044 [2024-10-13 17:53:47.738965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.783624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.783757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.044 [2024-10-13 17:53:47.783815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.602 ms 00:23:58.044 [2024-10-13 17:53:47.783835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.783888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.783907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.044 [2024-10-13 17:53:47.783923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:58.044 [2024-10-13 17:53:47.783938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.784400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.784487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.044 [2024-10-13 17:53:47.784529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:23:58.044 [2024-10-13 17:53:47.784548] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.784687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.784712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.044 [2024-10-13 17:53:47.784752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:58.044 [2024-10-13 17:53:47.784769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.796926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.797022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.044 [2024-10-13 17:53:47.797060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.130 ms 00:23:58.044 [2024-10-13 17:53:47.797078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.807639] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:58.044 [2024-10-13 17:53:47.807749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:58.044 [2024-10-13 17:53:47.807803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.807821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:58.044 [2024-10-13 17:53:47.807837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.625 ms 00:23:58.044 [2024-10-13 17:53:47.807851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.826597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.826695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:58.044 [2024-10-13 17:53:47.826747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:23:58.044 [2024-10-13 17:53:47.826764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.835549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.835647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:58.044 [2024-10-13 17:53:47.835688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.755 ms 00:23:58.044 [2024-10-13 17:53:47.835705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.844399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.844492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:58.044 [2024-10-13 17:53:47.844536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.661 ms 00:23:58.044 [2024-10-13 17:53:47.844553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.044 [2024-10-13 17:53:47.845070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.044 [2024-10-13 17:53:47.845149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:58.044 [2024-10-13 17:53:47.845198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:23:58.044 [2024-10-13 17:53:47.845216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.306 
[2024-10-13 17:53:47.893220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.306 [2024-10-13 17:53:47.893391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:58.306 [2024-10-13 17:53:47.893435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.977 ms 00:23:58.306 [2024-10-13 17:53:47.893455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.306 [2024-10-13 17:53:47.902153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:58.306 [2024-10-13 17:53:47.904802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.904889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:58.307 [2024-10-13 17:53:47.904930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.287 ms 00:23:58.307 [2024-10-13 17:53:47.904948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.905050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.905079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:58.307 [2024-10-13 17:53:47.905096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:58.307 [2024-10-13 17:53:47.905136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.905216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.905236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:58.307 [2024-10-13 17:53:47.905287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:58.307 [2024-10-13 17:53:47.905304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.905327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.905335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:58.307 [2024-10-13 17:53:47.905346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:58.307 [2024-10-13 17:53:47.905353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.905385] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:58.307 [2024-10-13 17:53:47.905393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.905400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:58.307 [2024-10-13 17:53:47.905407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:58.307 [2024-10-13 17:53:47.905413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.924508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.924704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:58.307 [2024-10-13 17:53:47.924744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.080 ms 00:23:58.307 [2024-10-13 17:53:47.924762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.924831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.307 [2024-10-13 17:53:47.924901] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:58.307 [2024-10-13 17:53:47.924921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:58.307 [2024-10-13 17:53:47.924936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.307 [2024-10-13 17:53:47.925913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.826 ms, result 0 00:23:59.248  [2024-10-13T17:53:50.006Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-13T17:53:50.949Z] Copying: 23/1024 [MB] (10 MBps) [2024-10-13T17:53:52.334Z] Copying: 42/1024 [MB] (18 MBps) [2024-10-13T17:53:53.316Z] Copying: 61/1024 [MB] (19 MBps) [2024-10-13T17:53:54.265Z] Copying: 77/1024 [MB] (16 MBps) [2024-10-13T17:53:55.210Z] Copying: 100/1024 [MB] (22 MBps) [2024-10-13T17:53:56.155Z] Copying: 115/1024 [MB] (14 MBps) [2024-10-13T17:53:57.099Z] Copying: 135/1024 [MB] (19 MBps) [2024-10-13T17:53:58.044Z] Copying: 150/1024 [MB] (14 MBps) [2024-10-13T17:53:58.986Z] Copying: 163/1024 [MB] (13 MBps) [2024-10-13T17:54:00.372Z] Copying: 190/1024 [MB] (27 MBps) [2024-10-13T17:54:00.945Z] Copying: 212/1024 [MB] (22 MBps) [2024-10-13T17:54:02.334Z] Copying: 226/1024 [MB] (13 MBps) [2024-10-13T17:54:03.279Z] Copying: 239/1024 [MB] (12 MBps) [2024-10-13T17:54:04.224Z] Copying: 255/1024 [MB] (16 MBps) [2024-10-13T17:54:05.167Z] Copying: 271/1024 [MB] (16 MBps) [2024-10-13T17:54:06.109Z] Copying: 288/1024 [MB] (16 MBps) [2024-10-13T17:54:07.049Z] Copying: 302/1024 [MB] (13 MBps) [2024-10-13T17:54:07.990Z] Copying: 316/1024 [MB] (13 MBps) [2024-10-13T17:54:09.379Z] Copying: 352/1024 [MB] (35 MBps) [2024-10-13T17:54:09.976Z] Copying: 372/1024 [MB] (20 MBps) [2024-10-13T17:54:11.365Z] Copying: 386/1024 [MB] (13 MBps) [2024-10-13T17:54:12.308Z] Copying: 405/1024 [MB] (19 MBps) [2024-10-13T17:54:13.251Z] Copying: 417/1024 [MB] (11 MBps) [2024-10-13T17:54:14.195Z] Copying: 446/1024 [MB] (28 MBps) [2024-10-13T17:54:15.138Z] Copying: 475/1024 [MB] (29 MBps) [2024-10-13T17:54:16.083Z] Copying: 498/1024 [MB] (22 MBps) [2024-10-13T17:54:17.029Z] Copying: 529/1024 [MB] (30 MBps) [2024-10-13T17:54:17.973Z] Copying: 543/1024 [MB] (14 MBps) [2024-10-13T17:54:19.360Z] Copying: 557/1024 [MB] (14 MBps) [2024-10-13T17:54:20.305Z] Copying: 570/1024 [MB] (12 MBps) [2024-10-13T17:54:21.248Z] Copying: 582/1024 [MB] (12 MBps) [2024-10-13T17:54:22.193Z] Copying: 596/1024 [MB] (13 MBps) [2024-10-13T17:54:23.138Z] Copying: 612/1024 [MB] (16 MBps) [2024-10-13T17:54:24.080Z] Copying: 626/1024 [MB] (13 MBps) [2024-10-13T17:54:25.024Z] Copying: 641/1024 [MB] (15 MBps) [2024-10-13T17:54:25.970Z] Copying: 655/1024 [MB] (13 MBps) [2024-10-13T17:54:27.377Z] Copying: 671/1024 [MB] (16 MBps) [2024-10-13T17:54:27.963Z] Copying: 688/1024 [MB] (17 MBps) [2024-10-13T17:54:29.350Z] Copying: 700/1024 [MB] (11 MBps) [2024-10-13T17:54:30.295Z] Copying: 713/1024 [MB] (12 MBps) [2024-10-13T17:54:31.240Z] Copying: 728/1024 [MB] (15 MBps) [2024-10-13T17:54:32.184Z] Copying: 740/1024 [MB] (11 MBps) [2024-10-13T17:54:33.127Z] Copying: 755/1024 [MB] (15 MBps) [2024-10-13T17:54:34.072Z] Copying: 780/1024 [MB] (25 MBps) [2024-10-13T17:54:35.016Z] Copying: 792/1024 [MB] (11 MBps) [2024-10-13T17:54:35.961Z] Copying: 807/1024 [MB] (14 MBps) [2024-10-13T17:54:37.349Z] Copying: 823/1024 [MB] (16 MBps) [2024-10-13T17:54:38.293Z] Copying: 840/1024 [MB] (16 MBps) [2024-10-13T17:54:39.236Z] Copying: 860/1024 [MB] (20 MBps) [2024-10-13T17:54:40.181Z] Copying: 882/1024 [MB] (22 MBps) 
[2024-10-13T17:54:41.124Z] Copying: 895/1024 [MB] (13 MBps) [2024-10-13T17:54:42.068Z] Copying: 923/1024 [MB] (27 MBps) [2024-10-13T17:54:43.014Z] Copying: 950/1024 [MB] (27 MBps) [2024-10-13T17:54:43.957Z] Copying: 963/1024 [MB] (12 MBps) [2024-10-13T17:54:44.959Z] Copying: 978/1024 [MB] (14 MBps) [2024-10-13T17:54:46.376Z] Copying: 1006/1024 [MB] (28 MBps) [2024-10-13T17:54:46.637Z] Copying: 1023/1024 [MB] (16 MBps) [2024-10-13T17:54:46.637Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-13 17:54:46.614869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.823 [2024-10-13 17:54:46.614976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.823 [2024-10-13 17:54:46.614996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:56.823 [2024-10-13 17:54:46.615007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.823 [2024-10-13 17:54:46.617103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.823 [2024-10-13 17:54:46.623840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.823 [2024-10-13 17:54:46.623907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.823 [2024-10-13 17:54:46.623921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.674 ms 00:24:56.823 [2024-10-13 17:54:46.623931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.084 [2024-10-13 17:54:46.637040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.084 [2024-10-13 17:54:46.637105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:57.084 [2024-10-13 17:54:46.637119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.194 ms 00:24:57.084 [2024-10-13 17:54:46.637129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.084 [2024-10-13 17:54:46.661150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.084 [2024-10-13 17:54:46.661212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:57.084 [2024-10-13 17:54:46.661227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:24:57.084 [2024-10-13 17:54:46.661239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.084 [2024-10-13 17:54:46.667441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.084 [2024-10-13 17:54:46.667486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:57.084 [2024-10-13 17:54:46.667510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:24:57.084 [2024-10-13 17:54:46.667519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.084 [2024-10-13 17:54:46.695531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.084 [2024-10-13 17:54:46.695592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:57.084 [2024-10-13 17:54:46.695606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.954 ms 00:24:57.084 [2024-10-13 17:54:46.695615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.084 [2024-10-13 17:54:46.713099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.084 [2024-10-13 17:54:46.713152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:24:57.084 [2024-10-13 17:54:46.713166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.434 ms 00:24:57.084 [2024-10-13 17:54:46.713175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:46.919435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.347 [2024-10-13 17:54:46.919501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:57.347 [2024-10-13 17:54:46.919518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.226 ms 00:24:57.347 [2024-10-13 17:54:46.919529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:46.946266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.347 [2024-10-13 17:54:46.946317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:57.347 [2024-10-13 17:54:46.946331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.711 ms 00:24:57.347 [2024-10-13 17:54:46.946340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:46.972602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.347 [2024-10-13 17:54:46.972654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:57.347 [2024-10-13 17:54:46.972668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.172 ms 00:24:57.347 [2024-10-13 17:54:46.972676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:46.997093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.347 [2024-10-13 17:54:46.997145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:57.347 [2024-10-13 17:54:46.997157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:24:57.347 [2024-10-13 17:54:46.997164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:47.021755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.347 [2024-10-13 17:54:47.021811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:57.347 [2024-10-13 17:54:47.021824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.515 ms 00:24:57.347 [2024-10-13 17:54:47.021832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.347 [2024-10-13 17:54:47.021878] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:57.347 [2024-10-13 17:54:47.021896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99584 / 261120 wr_cnt: 1 state: open 00:24:57.347 [2024-10-13 17:54:47.021909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021954] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.021998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:57.347 [2024-10-13 17:54:47.022265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 
17:54:47.022365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:57.348 [2024-10-13 17:54:47.022572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:24:57.348 [2024-10-13 17:54:47.022581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:57.348 [2024-10-13 17:54:47.022751] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:57.348 [2024-10-13 17:54:47.022761] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf00a7e-b346-4e16-a62c-1327e6d2445c
00:24:57.348 [2024-10-13 17:54:47.022770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99584
00:24:57.348 [2024-10-13 17:54:47.022778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100544
00:24:57.348 [2024-10-13 17:54:47.022800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99584
00:24:57.348 [2024-10-13 17:54:47.022810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096
00:24:57.348 [2024-10-13 17:54:47.022818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:57.348 [2024-10-13 17:54:47.022828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:57.348 [2024-10-13 17:54:47.022837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:57.348 [2024-10-13 17:54:47.022844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:57.348 [2024-10-13 17:54:47.022852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:57.348 [2024-10-13 17:54:47.022860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.348 [2024-10-13 17:54:47.022868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:57.348 [2024-10-13 17:54:47.022877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms
00:24:57.348 [2024-10-13 17:54:47.022885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.037621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.348 [2024-10-13 17:54:47.037676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:57.348 [2024-10-13 17:54:47.037689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.697 ms
00:24:57.348 [2024-10-13 17:54:47.037698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.038133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.348 [2024-10-13 17:54:47.038145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:57.348 [2024-10-13 17:54:47.038156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms
00:24:57.348 [2024-10-13 17:54:47.038164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.077792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:57.348 [2024-10-13 17:54:47.077847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:57.348 [2024-10-13 17:54:47.077862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:57.348 [2024-10-13 17:54:47.077872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.077954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:57.348 [2024-10-13 17:54:47.077964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:57.348 [2024-10-13 17:54:47.077976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:57.348 [2024-10-13 17:54:47.077987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.078066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:57.348 [2024-10-13 17:54:47.078078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:57.348 [2024-10-13 17:54:47.078088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:57.348 [2024-10-13 17:54:47.078097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.348 [2024-10-13 17:54:47.078115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:57.348 [2024-10-13 17:54:47.078123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:57.348 [2024-10-13 17:54:47.078132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:57.348 [2024-10-13 17:54:47.078141]
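The statistics dump above carries its own consistency check: WAF (write amplification factor) is total media writes divided by user writes, the extra writes being what the FTL issued on its own behalf (metadata, relocation, and so on). A minimal verification with the figures copied from the ftl_dev_dump_stats lines (nothing here is an SPDK API):

# WAF from the dump above: total writes vs. what the user submitted.
total_writes = 100544   # "total writes"
user_writes  = 99584    # "user writes"
print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0096

which reproduces the reported "WAF: 1.0096" exactly.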
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.169961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.170025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.610 [2024-10-13 17:54:47.170040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.170049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.244453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.244520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.610 [2024-10-13 17:54:47.244534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.244543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.244676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.244692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.610 [2024-10-13 17:54:47.244702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.244710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.244755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.244765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.610 [2024-10-13 17:54:47.244776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.244785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.244893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.244904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.610 [2024-10-13 17:54:47.244918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.244927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.244968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.244979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.610 [2024-10-13 17:54:47.244989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.244998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.245052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.245063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.610 [2024-10-13 17:54:47.245077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.610 [2024-10-13 17:54:47.245086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.610 [2024-10-13 17:54:47.245145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.610 [2024-10-13 17:54:47.245157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.610 [2024-10-13 17:54:47.245166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:24:57.610 [2024-10-13 17:54:47.245176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.610 [2024-10-13 17:54:47.245339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 631.391 ms, result 0
00:24:59.524
00:24:59.524
00:24:59.524 17:54:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:25:01.440 17:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:01.440 [2024-10-13 17:54:51.145040] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
00:25:01.701 [2024-10-13 17:54:51.145180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79676 ]
00:25:01.701 [2024-10-13 17:54:51.291996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.962 [2024-10-13 17:54:51.437607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:01.962 [2024-10-13 17:54:51.773192] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:02.224 [2024-10-13 17:54:51.773288] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:02.224 [2024-10-13 17:54:51.938386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:02.224 [2024-10-13 17:54:51.938458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:02.224 [2024-10-13 17:54:51.938475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:25:02.224 [2024-10-13 17:54:51.938489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.224 [2024-10-13 17:54:51.938550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:02.224 [2024-10-13 17:54:51.938579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:02.224 [2024-10-13 17:54:51.938589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:25:02.224 [2024-10-13 17:54:51.938601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.224 [2024-10-13 17:54:51.938624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:02.224 [2024-10-13 17:54:51.939403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:02.224 [2024-10-13 17:54:51.939432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:02.224 [2024-10-13 17:54:51.939446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:02.224 [2024-10-13 17:54:51.939456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms
00:25:02.224 [2024-10-13 17:54:51.939464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.224 [2024-10-13 17:54:51.941776] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:02.224 [2024-10-13 17:54:51.957246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:02.224 [2024-10-13 17:54:51.957306]
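The spdk_dd invocation above reads --count=262144 blocks from ftl0 into the test file. Assuming a 4096-byte logical block size for the FTL device (an assumption, but the one consistent with the 1024 MB the progress entries further down report), the transfer size lines up exactly:

# spdk_dd's --count appears to be in device blocks here; with an
# assumed 4096-byte block size, 262144 blocks is the 1024 MB copied.
BLOCK_SIZE_BYTES = 4096      # assumed FTL logical block size
BLOCK_COUNT      = 262144    # from --count=262144
print(BLOCK_COUNT * BLOCK_SIZE_BYTES // (1024 * 1024), "MiB")   # -> 1024 MiB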
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:02.224 [2024-10-13 17:54:51.957321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.472 ms 00:25:02.224 [2024-10-13 17:54:51.957330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.224 [2024-10-13 17:54:51.957417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.224 [2024-10-13 17:54:51.957428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:02.224 [2024-10-13 17:54:51.957441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:02.224 [2024-10-13 17:54:51.957450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.224 [2024-10-13 17:54:51.969321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.224 [2024-10-13 17:54:51.969375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:02.224 [2024-10-13 17:54:51.969387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.787 ms 00:25:02.224 [2024-10-13 17:54:51.969397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.224 [2024-10-13 17:54:51.969493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.224 [2024-10-13 17:54:51.969504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:02.224 [2024-10-13 17:54:51.969514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:02.225 [2024-10-13 17:54:51.969526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.969611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.225 [2024-10-13 17:54:51.969623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:02.225 [2024-10-13 17:54:51.969632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:02.225 [2024-10-13 17:54:51.969640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.969664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:02.225 [2024-10-13 17:54:51.974489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.225 [2024-10-13 17:54:51.974535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:02.225 [2024-10-13 17:54:51.974547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:25:02.225 [2024-10-13 17:54:51.974566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.974613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.225 [2024-10-13 17:54:51.974623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:02.225 [2024-10-13 17:54:51.974632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:02.225 [2024-10-13 17:54:51.974641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.974684] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:02.225 [2024-10-13 17:54:51.974712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:02.225 [2024-10-13 17:54:51.974752] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob load 0x48 bytes 00:25:02.225 [2024-10-13 17:54:51.974774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:02.225 [2024-10-13 17:54:51.974890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:02.225 [2024-10-13 17:54:51.974903] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:02.225 [2024-10-13 17:54:51.974916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:02.225 [2024-10-13 17:54:51.974928] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:02.225 [2024-10-13 17:54:51.974937] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:02.225 [2024-10-13 17:54:51.974947] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:02.225 [2024-10-13 17:54:51.974956] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:02.225 [2024-10-13 17:54:51.974964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:02.225 [2024-10-13 17:54:51.974973] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:02.225 [2024-10-13 17:54:51.974982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.225 [2024-10-13 17:54:51.974995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:02.225 [2024-10-13 17:54:51.975003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:02.225 [2024-10-13 17:54:51.975011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.975096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.225 [2024-10-13 17:54:51.975106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:02.225 [2024-10-13 17:54:51.975115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:02.225 [2024-10-13 17:54:51.975122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.225 [2024-10-13 17:54:51.975231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:02.225 [2024-10-13 17:54:51.975243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:02.225 [2024-10-13 17:54:51.975255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:02.225 [2024-10-13 17:54:51.975280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:02.225 [2024-10-13 17:54:51.975302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:02.225 [2024-10-13 17:54:51.975317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 
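The startup lines above report 20971520 L2P entries with a 4-byte address size, and the NV cache layout dump shows the l2p region spanning 80.00 MiB; the two figures agree, as a quick check confirms (variable names are illustrative only):

# Cross-check "Region l2p ... blocks: 80.00 MiB" against the
# reported L2P geometry: entries * address size = region size.
l2p_entries    = 20_971_520    # "L2P entries: 20971520"
l2p_addr_bytes = 4             # "L2P address size: 4"
print(l2p_entries * l2p_addr_bytes / (1 << 20), "MiB")   # -> 80.0 MiB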
00:25:02.225 [2024-10-13 17:54:51.975324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:02.225 [2024-10-13 17:54:51.975331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:02.225 [2024-10-13 17:54:51.975338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:02.225 [2024-10-13 17:54:51.975352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:02.225 [2024-10-13 17:54:51.975367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:02.225 [2024-10-13 17:54:51.975382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:02.225 [2024-10-13 17:54:51.975403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:02.225 [2024-10-13 17:54:51.975424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:02.225 [2024-10-13 17:54:51.975445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:02.225 [2024-10-13 17:54:51.975465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:02.225 [2024-10-13 17:54:51.975486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:02.225 [2024-10-13 17:54:51.975500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:02.225 [2024-10-13 17:54:51.975507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:02.225 [2024-10-13 17:54:51.975514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:02.225 [2024-10-13 17:54:51.975520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:02.225 [2024-10-13 17:54:51.975527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:02.225 [2024-10-13 17:54:51.975534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:02.225 [2024-10-13 17:54:51.975547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:02.225 [2024-10-13 17:54:51.975554] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975576] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:02.225 [2024-10-13 17:54:51.975584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:02.225 [2024-10-13 17:54:51.975592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.225 [2024-10-13 17:54:51.975615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:02.225 [2024-10-13 17:54:51.975623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:02.225 [2024-10-13 17:54:51.975630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:02.225 [2024-10-13 17:54:51.975638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:02.225 [2024-10-13 17:54:51.975646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:02.225 [2024-10-13 17:54:51.975653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:02.225 [2024-10-13 17:54:51.975663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:02.225 [2024-10-13 17:54:51.975673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:02.225 [2024-10-13 17:54:51.975682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:02.225 [2024-10-13 17:54:51.975691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:02.225 [2024-10-13 17:54:51.975699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:02.225 [2024-10-13 17:54:51.975707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:02.225 [2024-10-13 17:54:51.975714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:02.225 [2024-10-13 17:54:51.975722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:02.225 [2024-10-13 17:54:51.975743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:02.225 [2024-10-13 17:54:51.975751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:02.225 [2024-10-13 17:54:51.975760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:02.225 [2024-10-13 17:54:51.975767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:02.225 [2024-10-13 17:54:51.975776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:02.225 [2024-10-13 17:54:51.975783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 
blk_offs:0x71e0 blk_sz:0x20 00:25:02.225 [2024-10-13 17:54:51.975791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:02.225 [2024-10-13 17:54:51.975799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:02.226 [2024-10-13 17:54:51.975807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:02.226 [2024-10-13 17:54:51.975816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:02.226 [2024-10-13 17:54:51.975828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:02.226 [2024-10-13 17:54:51.975837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:02.226 [2024-10-13 17:54:51.975844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:02.226 [2024-10-13 17:54:51.975852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:02.226 [2024-10-13 17:54:51.975861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.226 [2024-10-13 17:54:51.975870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:02.226 [2024-10-13 17:54:51.975879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:25:02.226 [2024-10-13 17:54:51.975892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.226 [2024-10-13 17:54:52.014457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.226 [2024-10-13 17:54:52.014523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.226 [2024-10-13 17:54:52.014537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.516 ms 00:25:02.226 [2024-10-13 17:54:52.014546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.226 [2024-10-13 17:54:52.014662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.226 [2024-10-13 17:54:52.014794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:02.226 [2024-10-13 17:54:52.014804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:02.226 [2024-10-13 17:54:52.014813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.068362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.068426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.488 [2024-10-13 17:54:52.068440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.483 ms 00:25:02.488 [2024-10-13 17:54:52.068450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.068506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.068518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.488 [2024-10-13 17:54:52.068529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.005 ms 00:25:02.488 [2024-10-13 17:54:52.068537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.069312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.069361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.488 [2024-10-13 17:54:52.069373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:25:02.488 [2024-10-13 17:54:52.069382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.069585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.069598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.488 [2024-10-13 17:54:52.069608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:25:02.488 [2024-10-13 17:54:52.069616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.088187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.088236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.488 [2024-10-13 17:54:52.088249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.546 ms 00:25:02.488 [2024-10-13 17:54:52.088261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.103788] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:02.488 [2024-10-13 17:54:52.103846] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:02.488 [2024-10-13 17:54:52.103862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.103873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:02.488 [2024-10-13 17:54:52.103884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.480 ms 00:25:02.488 [2024-10-13 17:54:52.103892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.130198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.130257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:02.488 [2024-10-13 17:54:52.130280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.249 ms 00:25:02.488 [2024-10-13 17:54:52.130289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.143629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.143694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:02.488 [2024-10-13 17:54:52.143707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.275 ms 00:25:02.488 [2024-10-13 17:54:52.143715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.156374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.156425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:02.488 [2024-10-13 17:54:52.156437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.599 ms 00:25:02.488 [2024-10-13 17:54:52.156445] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.157124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.157154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:02.488 [2024-10-13 17:54:52.157165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:25:02.488 [2024-10-13 17:54:52.157175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.230552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.230635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:02.488 [2024-10-13 17:54:52.230652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.352 ms 00:25:02.488 [2024-10-13 17:54:52.230671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.488 [2024-10-13 17:54:52.243103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:02.488 [2024-10-13 17:54:52.247527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.488 [2024-10-13 17:54:52.247589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:02.488 [2024-10-13 17:54:52.247603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.789 ms 00:25:02.488 [2024-10-13 17:54:52.247612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.247716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.247755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:02.489 [2024-10-13 17:54:52.247767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:02.489 [2024-10-13 17:54:52.247777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.249907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.249956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:02.489 [2024-10-13 17:54:52.249970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:25:02.489 [2024-10-13 17:54:52.249979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.250021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.250031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:02.489 [2024-10-13 17:54:52.250041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:02.489 [2024-10-13 17:54:52.250051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.250098] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:02.489 [2024-10-13 17:54:52.250111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.250124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:02.489 [2024-10-13 17:54:52.250134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:02.489 [2024-10-13 17:54:52.250144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.277228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.277286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:02.489 [2024-10-13 17:54:52.277301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.062 ms 00:25:02.489 [2024-10-13 17:54:52.277311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.277414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.489 [2024-10-13 17:54:52.277426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:02.489 [2024-10-13 17:54:52.277437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:02.489 [2024-10-13 17:54:52.277446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.489 [2024-10-13 17:54:52.279019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.021 ms, result 0 00:25:03.876  [2024-10-13T17:54:54.634Z] Copying: 1136/1048576 [kB] (1136 kBps) [2024-10-13T17:54:55.579Z] Copying: 4140/1048576 [kB] (3004 kBps) [2024-10-13T17:54:56.524Z] Copying: 15/1024 [MB] (10 MBps) [2024-10-13T17:54:57.469Z] Copying: 31/1024 [MB] (16 MBps) [2024-10-13T17:54:58.856Z] Copying: 64/1024 [MB] (33 MBps) [2024-10-13T17:54:59.799Z] Copying: 106/1024 [MB] (42 MBps) [2024-10-13T17:55:00.743Z] Copying: 149/1024 [MB] (42 MBps) [2024-10-13T17:55:01.687Z] Copying: 190/1024 [MB] (41 MBps) [2024-10-13T17:55:02.667Z] Copying: 216/1024 [MB] (25 MBps) [2024-10-13T17:55:03.608Z] Copying: 232/1024 [MB] (16 MBps) [2024-10-13T17:55:04.550Z] Copying: 259/1024 [MB] (27 MBps) [2024-10-13T17:55:05.490Z] Copying: 276/1024 [MB] (16 MBps) [2024-10-13T17:55:06.876Z] Copying: 301/1024 [MB] (25 MBps) [2024-10-13T17:55:07.820Z] Copying: 344/1024 [MB] (42 MBps) [2024-10-13T17:55:08.763Z] Copying: 387/1024 [MB] (42 MBps) [2024-10-13T17:55:09.710Z] Copying: 413/1024 [MB] (26 MBps) [2024-10-13T17:55:10.655Z] Copying: 446/1024 [MB] (32 MBps) [2024-10-13T17:55:11.599Z] Copying: 468/1024 [MB] (21 MBps) [2024-10-13T17:55:12.542Z] Copying: 492/1024 [MB] (24 MBps) [2024-10-13T17:55:13.486Z] Copying: 516/1024 [MB] (24 MBps) [2024-10-13T17:55:14.874Z] Copying: 540/1024 [MB] (23 MBps) [2024-10-13T17:55:15.818Z] Copying: 569/1024 [MB] (28 MBps) [2024-10-13T17:55:16.761Z] Copying: 592/1024 [MB] (23 MBps) [2024-10-13T17:55:17.707Z] Copying: 618/1024 [MB] (25 MBps) [2024-10-13T17:55:18.650Z] Copying: 654/1024 [MB] (35 MBps) [2024-10-13T17:55:19.637Z] Copying: 689/1024 [MB] (35 MBps) [2024-10-13T17:55:20.581Z] Copying: 705/1024 [MB] (15 MBps) [2024-10-13T17:55:21.524Z] Copying: 733/1024 [MB] (28 MBps) [2024-10-13T17:55:22.910Z] Copying: 758/1024 [MB] (24 MBps) [2024-10-13T17:55:23.482Z] Copying: 786/1024 [MB] (27 MBps) [2024-10-13T17:55:24.871Z] Copying: 811/1024 [MB] (25 MBps) [2024-10-13T17:55:25.816Z] Copying: 835/1024 [MB] (23 MBps) [2024-10-13T17:55:26.759Z] Copying: 860/1024 [MB] (25 MBps) [2024-10-13T17:55:27.704Z] Copying: 882/1024 [MB] (21 MBps) [2024-10-13T17:55:28.647Z] Copying: 918/1024 [MB] (35 MBps) [2024-10-13T17:55:29.590Z] Copying: 946/1024 [MB] (27 MBps) [2024-10-13T17:55:30.533Z] Copying: 972/1024 [MB] (25 MBps) [2024-10-13T17:55:31.478Z] Copying: 1006/1024 [MB] (34 MBps) [2024-10-13T17:55:31.478Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-13 17:55:31.390553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.390656] mngt/ftl_mngt.c: 
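Each management step above is traced as an Action with a name, a duration and a status, and finish_msg then reports the process total (here 'FTL startup', 340.021 ms, of which 'Restore P2L checkpoints' alone accounts for 73.352 ms). When profiling which steps dominate, a small parser over these NOTICE lines is handy; this is a hypothetical helper that assumes one log entry per line:

import re

NAME     = re.compile(r"name: (.+)")
DURATION = re.compile(r"duration: ([0-9.]+) ms")

def slowest_steps(lines, top=5):
    """Pair each trace_step name with the duration that follows it."""
    steps, pending = [], None
    for line in lines:
        if "trace_step" not in line:
            continue
        if (m := NAME.search(line)):
            pending = m.group(1).strip()
        elif (m := DURATION.search(line)) and pending is not None:
            steps.append((float(m.group(1)), pending))
            pending = None
    return sorted(steps, reverse=True)[:top]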
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.664 [2024-10-13 17:55:31.390680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:41.664 [2024-10-13 17:55:31.390700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.390734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:41.664 [2024-10-13 17:55:31.395421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.395470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.664 [2024-10-13 17:55:31.395487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:25:41.664 [2024-10-13 17:55:31.395500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.395895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.395922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.664 [2024-10-13 17:55:31.395936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:25:41.664 [2024-10-13 17:55:31.395956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.405804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.405842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.664 [2024-10-13 17:55:31.405852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.825 ms 00:25:41.664 [2024-10-13 17:55:31.405858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.410642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.410673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.664 [2024-10-13 17:55:31.410682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:25:41.664 [2024-10-13 17:55:31.410689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.430001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.430031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.664 [2024-10-13 17:55:31.430041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.255 ms 00:25:41.664 [2024-10-13 17:55:31.430047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.441867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.441894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.664 [2024-10-13 17:55:31.441903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.789 ms 00:25:41.664 [2024-10-13 17:55:31.441910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.444113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.444152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.664 [2024-10-13 17:55:31.444160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:25:41.664 [2024-10-13 17:55:31.444166] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.664 [2024-10-13 17:55:31.462368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.664 [2024-10-13 17:55:31.462394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.664 [2024-10-13 17:55:31.462402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.190 ms 00:25:41.664 [2024-10-13 17:55:31.462408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.926 [2024-10-13 17:55:31.480215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.926 [2024-10-13 17:55:31.480241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.927 [2024-10-13 17:55:31.480257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:25:41.927 [2024-10-13 17:55:31.480263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.927 [2024-10-13 17:55:31.497505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.927 [2024-10-13 17:55:31.497532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.927 [2024-10-13 17:55:31.497540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.215 ms 00:25:41.927 [2024-10-13 17:55:31.497546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.927 [2024-10-13 17:55:31.515048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.927 [2024-10-13 17:55:31.515075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.927 [2024-10-13 17:55:31.515083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.447 ms 00:25:41.927 [2024-10-13 17:55:31.515088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.927 [2024-10-13 17:55:31.515115] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.927 [2024-10-13 17:55:31.515127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:41.927 [2024-10-13 17:55:31.515136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:41.927 [2024-10-13 17:55:31.515143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.927 [2024-10-13 17:55:31.515204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.927 [2024-10-13 17:55:31.515657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515664] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.928 [2024-10-13 17:55:31.515781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.928 [2024-10-13 17:55:31.515787] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf00a7e-b346-4e16-a62c-1327e6d2445c 00:25:41.928 [2024-10-13 17:55:31.515794] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:41.928 [2024-10-13 17:55:31.515800] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 165056 00:25:41.928 [2024-10-13 17:55:31.515806] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 163072 00:25:41.928 [2024-10-13 17:55:31.515813] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:25:41.928 [2024-10-13 17:55:31.515819] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.928 [2024-10-13 17:55:31.515828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.928 [2024-10-13 17:55:31.515834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.928 [2024-10-13 17:55:31.515844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.928 [2024-10-13 17:55:31.515849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.928 [2024-10-13 17:55:31.515855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.928 [2024-10-13 17:55:31.515861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Dump statistics 00:25:41.928 [2024-10-13 17:55:31.515867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:25:41.928 [2024-10-13 17:55:31.515873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.525940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.928 [2024-10-13 17:55:31.525966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.928 [2024-10-13 17:55:31.525979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.054 ms 00:25:41.928 [2024-10-13 17:55:31.525985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.526275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.928 [2024-10-13 17:55:31.526289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.928 [2024-10-13 17:55:31.526296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:25:41.928 [2024-10-13 17:55:31.526302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.553754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.553788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.928 [2024-10-13 17:55:31.553797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.553803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.553849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.553855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.928 [2024-10-13 17:55:31.553862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.553868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.553911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.553919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.928 [2024-10-13 17:55:31.553928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.553935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.553948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.553954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.928 [2024-10-13 17:55:31.553961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.553967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.617973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.618014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.928 [2024-10-13 17:55:31.618023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.618029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 
17:55:31.670038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.928 [2024-10-13 17:55:31.670048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.928 [2024-10-13 17:55:31.670133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.928 [2024-10-13 17:55:31.670188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.928 [2024-10-13 17:55:31.670282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.928 [2024-10-13 17:55:31.670330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.928 [2024-10-13 17:55:31.670412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.928 [2024-10-13 17:55:31.670472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.928 [2024-10-13 17:55:31.670478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.928 [2024-10-13 17:55:31.670484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.928 [2024-10-13 17:55:31.670599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.024 ms, result 0 00:25:42.499 00:25:42.500 00:25:42.500 17:55:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:45.048 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:45.048 17:55:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:45.048 [2024-10-13 17:55:34.523469] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:25:45.048 [2024-10-13 17:55:34.523580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80119 ] 00:25:45.048 [2024-10-13 17:55:34.664939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.048 [2024-10-13 17:55:34.758140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.309 [2024-10-13 17:55:34.988194] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:45.309 [2024-10-13 17:55:34.988250] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:45.571 [2024-10-13 17:55:35.141534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.141585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:45.571 [2024-10-13 17:55:35.141597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:45.571 [2024-10-13 17:55:35.141608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.141644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.141653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.571 [2024-10-13 17:55:35.141659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:45.571 [2024-10-13 17:55:35.141667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.141681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:45.571 [2024-10-13 17:55:35.142184] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:45.571 [2024-10-13 17:55:35.142201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.142210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.571 [2024-10-13 17:55:35.142217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:25:45.571 [2024-10-13 17:55:35.142224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.143507] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:45.571 [2024-10-13 17:55:35.153748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.153782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:45.571 [2024-10-13 17:55:35.153792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.242 ms 00:25:45.571 [2024-10-13 17:55:35.153800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.153846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.153854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:45.571 
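Note on the two commands above, from ftl/dirty_shutdown.sh@94 and @95: they are the read-back verification for this test. testfile, whose md5 was recorded earlier in the run, checks the first half of the data, and spdk_dd then copies the second half out of the recreated ftl0 bdev into testfile2 so it can be checksummed as well. A minimal sketch of that pattern follows, assuming the paths visible in this log and that --count/--skip are counted in 4 KiB blocks (262144 blocks is 1024 MiB, which matches the "Copying: 1024/1024 [MB]" summaries in this log); this is an illustration, not the test script itself:

  #!/usr/bin/env bash
  # Sketch of the read-back verification seen at dirty_shutdown.sh@94-95.
  set -euo pipefail

  SPDK=/home/vagrant/spdk_repo/spdk           # repo root, as seen in this log
  FTL_JSON="$SPDK/test/ftl/config/ftl.json"   # bdev config that brings ftl0 back up

  # First half: compare against the checksum captured before the dirty shutdown.
  md5sum -c "$SPDK/test/ftl/testfile.md5"

  # Second half: read blocks 262144..524287 of ftl0 into testfile2.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
      --count=262144 --skip=262144 --json="$FTL_JSON"

If the FTL metadata survived the dirty shutdown intact, testfile2 should produce the same md5 as its pre-shutdown counterpart; the FTL startup trace that follows here is ftl0 being brought back up inside spdk_dd for exactly that copy.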
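The statistics block dumped at the first shutdown above (ftl_debug.c: 211-220) can also be checked by hand: the WAF line is consistent with total writes divided by user writes, 165056 / 163072 = 1.0122, and the superblock layout entries are block counts in the same 4 KiB units, e.g. blk_offs:0x5120 is 20768 * 4 KiB = 81.12 MiB, the offset printed for Region p2l0. A short sketch of both computations; the WAF formula and the 0xa-to-p2l0 mapping are inferred from the numbers in this log, not taken from the SPDK sources:

  # Write amplification: device writes per user write (matches ftl_debug.c: 216).
  awk 'BEGIN { printf "WAF: %.4f\n", 165056 / 163072 }'                      # -> WAF: 1.0122

  # Layout offset: blk_offs 0x5120 (= 20768) blocks of 4 KiB, expressed in MiB.
  awk 'BEGIN { printf "p2l0 offset: %.2f MiB\n", 20768 * 4096 / 1048576 }'   # -> 81.12 MiB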
[2024-10-13 17:55:35.153863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:45.571 [2024-10-13 17:55:35.153869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.160212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.160242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.571 [2024-10-13 17:55:35.160249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.296 ms 00:25:45.571 [2024-10-13 17:55:35.160255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.160317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.160325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.571 [2024-10-13 17:55:35.160332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:45.571 [2024-10-13 17:55:35.160337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.160370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.160377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:45.571 [2024-10-13 17:55:35.160384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:45.571 [2024-10-13 17:55:35.160390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.160404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:45.571 [2024-10-13 17:55:35.163360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.163385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.571 [2024-10-13 17:55:35.163393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:25:45.571 [2024-10-13 17:55:35.163399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.163429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.163436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:45.571 [2024-10-13 17:55:35.163442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:45.571 [2024-10-13 17:55:35.163449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.163463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:45.571 [2024-10-13 17:55:35.163479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:45.571 [2024-10-13 17:55:35.163508] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:45.571 [2024-10-13 17:55:35.163521] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:45.571 [2024-10-13 17:55:35.163612] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:45.571 [2024-10-13 17:55:35.163622] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:45.571 [2024-10-13 17:55:35.163631] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:45.571 [2024-10-13 17:55:35.163639] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:45.571 [2024-10-13 17:55:35.163647] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:45.571 [2024-10-13 17:55:35.163653] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:45.571 [2024-10-13 17:55:35.163659] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:45.571 [2024-10-13 17:55:35.163666] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:45.571 [2024-10-13 17:55:35.163672] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:45.571 [2024-10-13 17:55:35.163678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.163686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:45.571 [2024-10-13 17:55:35.163691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:25:45.571 [2024-10-13 17:55:35.163704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.163768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.571 [2024-10-13 17:55:35.163774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:45.571 [2024-10-13 17:55:35.163780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:45.571 [2024-10-13 17:55:35.163787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.571 [2024-10-13 17:55:35.163864] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:45.571 [2024-10-13 17:55:35.163882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:45.571 [2024-10-13 17:55:35.163891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.571 [2024-10-13 17:55:35.163897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.571 [2024-10-13 17:55:35.163904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:45.571 [2024-10-13 17:55:35.163910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:45.571 [2024-10-13 17:55:35.163915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:45.571 [2024-10-13 17:55:35.163921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:45.571 [2024-10-13 17:55:35.163926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:45.571 [2024-10-13 17:55:35.163932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.571 [2024-10-13 17:55:35.163937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:45.571 [2024-10-13 17:55:35.163943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:45.571 [2024-10-13 17:55:35.163948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.571 [2024-10-13 17:55:35.163953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:45.571 [2024-10-13 17:55:35.163958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:45.571 [2024-10-13 17:55:35.163969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:25:45.571 [2024-10-13 17:55:35.163975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:45.571 [2024-10-13 17:55:35.163980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:45.571 [2024-10-13 17:55:35.163985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.571 [2024-10-13 17:55:35.163990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:45.571 [2024-10-13 17:55:35.163996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.571 [2024-10-13 17:55:35.164006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:45.571 [2024-10-13 17:55:35.164011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.571 [2024-10-13 17:55:35.164021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:45.571 [2024-10-13 17:55:35.164026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.571 [2024-10-13 17:55:35.164036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:45.571 [2024-10-13 17:55:35.164041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.571 [2024-10-13 17:55:35.164051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:45.571 [2024-10-13 17:55:35.164057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.571 [2024-10-13 17:55:35.164067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:45.571 [2024-10-13 17:55:35.164072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:45.571 [2024-10-13 17:55:35.164077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.571 [2024-10-13 17:55:35.164083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:45.571 [2024-10-13 17:55:35.164088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:45.571 [2024-10-13 17:55:35.164093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.571 [2024-10-13 17:55:35.164098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:45.572 [2024-10-13 17:55:35.164103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:45.572 [2024-10-13 17:55:35.164108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.572 [2024-10-13 17:55:35.164113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:45.572 [2024-10-13 17:55:35.164119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:45.572 [2024-10-13 17:55:35.164125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.572 [2024-10-13 17:55:35.164130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.572 [2024-10-13 17:55:35.164138] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:45.572 [2024-10-13 17:55:35.164143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:45.572 [2024-10-13 17:55:35.164148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:45.572 [2024-10-13 17:55:35.164154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:45.572 [2024-10-13 17:55:35.164159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:45.572 [2024-10-13 17:55:35.164164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:45.572 [2024-10-13 17:55:35.164171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:45.572 [2024-10-13 17:55:35.164178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:45.572 [2024-10-13 17:55:35.164190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:45.572 [2024-10-13 17:55:35.164195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:45.572 [2024-10-13 17:55:35.164201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:45.572 [2024-10-13 17:55:35.164206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:45.572 [2024-10-13 17:55:35.164211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:45.572 [2024-10-13 17:55:35.164217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:45.572 [2024-10-13 17:55:35.164223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:45.572 [2024-10-13 17:55:35.164228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:45.572 [2024-10-13 17:55:35.164233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:45.572 [2024-10-13 17:55:35.164261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:45.572 
[2024-10-13 17:55:35.164267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:45.572 [2024-10-13 17:55:35.164281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:45.572 [2024-10-13 17:55:35.164286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:45.572 [2024-10-13 17:55:35.164292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:45.572 [2024-10-13 17:55:35.164298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.164304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:45.572 [2024-10-13 17:55:35.164309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:25:45.572 [2024-10-13 17:55:35.164315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.188905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.188937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.572 [2024-10-13 17:55:35.188945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.544 ms 00:25:45.572 [2024-10-13 17:55:35.188951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.189014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.189024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:45.572 [2024-10-13 17:55:35.189030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:45.572 [2024-10-13 17:55:35.189036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.236052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.236086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.572 [2024-10-13 17:55:35.236096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.977 ms 00:25:45.572 [2024-10-13 17:55:35.236104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.236137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.236145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.572 [2024-10-13 17:55:35.236152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:45.572 [2024-10-13 17:55:35.236158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.236605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.236625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.572 [2024-10-13 17:55:35.236633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:25:45.572 [2024-10-13 17:55:35.236640] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.236752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.236766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.572 [2024-10-13 17:55:35.236774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:45.572 [2024-10-13 17:55:35.236780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.248672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.248698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.572 [2024-10-13 17:55:35.248707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.872 ms 00:25:45.572 [2024-10-13 17:55:35.248716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.258885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:45.572 [2024-10-13 17:55:35.258917] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:45.572 [2024-10-13 17:55:35.258926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.258933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:45.572 [2024-10-13 17:55:35.258941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.134 ms 00:25:45.572 [2024-10-13 17:55:35.258947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.277973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.278004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:45.572 [2024-10-13 17:55:35.278018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.994 ms 00:25:45.572 [2024-10-13 17:55:35.278024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.287062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.287094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:45.572 [2024-10-13 17:55:35.287101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.004 ms 00:25:45.572 [2024-10-13 17:55:35.287107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.295960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.295987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:45.572 [2024-10-13 17:55:35.295995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.826 ms 00:25:45.572 [2024-10-13 17:55:35.296000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.296461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.296481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.572 [2024-10-13 17:55:35.296488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:25:45.572 [2024-10-13 17:55:35.296494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 
17:55:35.343919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.343960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:45.572 [2024-10-13 17:55:35.343971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.408 ms 00:25:45.572 [2024-10-13 17:55:35.343982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.352056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.572 [2024-10-13 17:55:35.354276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.354303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.572 [2024-10-13 17:55:35.354313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.258 ms 00:25:45.572 [2024-10-13 17:55:35.354320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.354398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.354408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:45.572 [2024-10-13 17:55:35.354415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:45.572 [2024-10-13 17:55:35.354421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.355077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.355105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.572 [2024-10-13 17:55:35.355113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:25:45.572 [2024-10-13 17:55:35.355119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.572 [2024-10-13 17:55:35.355141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.572 [2024-10-13 17:55:35.355149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.572 [2024-10-13 17:55:35.355156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:45.573 [2024-10-13 17:55:35.355162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.573 [2024-10-13 17:55:35.355192] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:45.573 [2024-10-13 17:55:35.355200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.573 [2024-10-13 17:55:35.355209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:45.573 [2024-10-13 17:55:35.355215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:45.573 [2024-10-13 17:55:35.355221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.573 [2024-10-13 17:55:35.373966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.573 [2024-10-13 17:55:35.373996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.573 [2024-10-13 17:55:35.374005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.731 ms 00:25:45.573 [2024-10-13 17:55:35.374011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.573 [2024-10-13 17:55:35.374073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.573 [2024-10-13 17:55:35.374081] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:45.573 [2024-10-13 17:55:35.374088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:25:45.573 [2024-10-13 17:55:35.374094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:45.573 [2024-10-13 17:55:35.374949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 233.038 ms, result 0
00:25:46.987 [... spdk_dd per-interval progress trimmed ...] [2024-10-13T17:56:33.464Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-10-13 17:56:33.245479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.245610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:43.650 [2024-10-13 17:56:33.245631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:26:43.650 [2024-10-13 17:56:33.245640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.245669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:43.650 [2024-10-13 17:56:33.249366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.249408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:43.650 [2024-10-13 17:56:33.249421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms
00:26:43.650 [2024-10-13 17:56:33.249432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.249718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.249732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:43.650 [2024-10-13 17:56:33.249742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms
00:26:43.650 [2024-10-13 17:56:33.249751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.253601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.253627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:43.650 [2024-10-13 17:56:33.253638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms
00:26:43.650 [2024-10-13 17:56:33.253647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.260871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.260923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:43.650 [2024-10-13 17:56:33.260936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms
00:26:43.650 [2024-10-13 17:56:33.260945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.291202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.291248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:43.650 [2024-10-13 17:56:33.291263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.177 ms
00:26:43.650 [2024-10-13 17:56:33.291272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:43.650 [2024-10-13 17:56:33.309020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:43.650 [2024-10-13 17:56:33.309064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:43.650 [2024-10-13 17:56:33.309078]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:26:43.650 [2024-10-13 17:56:33.309088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.314340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.650 [2024-10-13 17:56:33.314380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:43.650 [2024-10-13 17:56:33.314401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:26:43.650 [2024-10-13 17:56:33.314411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.340516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.650 [2024-10-13 17:56:33.340565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:43.650 [2024-10-13 17:56:33.340579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.088 ms 00:26:43.650 [2024-10-13 17:56:33.340588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.366133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.650 [2024-10-13 17:56:33.366187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:43.650 [2024-10-13 17:56:33.366199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.498 ms 00:26:43.650 [2024-10-13 17:56:33.366208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.391084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.650 [2024-10-13 17:56:33.391121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:43.650 [2024-10-13 17:56:33.391133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.830 ms 00:26:43.650 [2024-10-13 17:56:33.391141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.416046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.650 [2024-10-13 17:56:33.416084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:43.650 [2024-10-13 17:56:33.416096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.831 ms 00:26:43.650 [2024-10-13 17:56:33.416104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.650 [2024-10-13 17:56:33.416150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:43.650 [2024-10-13 17:56:33.416168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.650 [2024-10-13 17:56:33.416181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:43.650 [2024-10-13 17:56:33.416191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:26:43.650 [2024-10-13 17:56:33.416237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:43.650 [2024-10-13 17:56:33.416410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416882] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.416992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:43.651 [2024-10-13 17:56:33.417057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:43.651 [2024-10-13 17:56:33.417074] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf00a7e-b346-4e16-a62c-1327e6d2445c 00:26:43.651 [2024-10-13 17:56:33.417085] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:43.651 [2024-10-13 17:56:33.417098] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:43.651 [2024-10-13 17:56:33.417107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:43.651 [2024-10-13 17:56:33.417116] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:43.651 [2024-10-13 17:56:33.417125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:43.651 [2024-10-13 17:56:33.417134] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:43.651 [2024-10-13 17:56:33.417149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:43.651 [2024-10-13 17:56:33.417156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:43.651 [2024-10-13 17:56:33.417163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:43.651 [2024-10-13 17:56:33.417170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.651 [2024-10-13 17:56:33.417182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:43.651 [2024-10-13 17:56:33.417192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:26:43.651 [2024-10-13 17:56:33.417201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.651 [2024-10-13 17:56:33.432229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.651 [2024-10-13 17:56:33.432265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:43.651 [2024-10-13 17:56:33.432279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.008 ms 00:26:43.651 [2024-10-13 17:56:33.432288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.651 [2024-10-13 17:56:33.432750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.651 [2024-10-13 17:56:33.432837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:43.651 [2024-10-13 17:56:33.432849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:26:43.651 [2024-10-13 17:56:33.432865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.913 [2024-10-13 17:56:33.472299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.913 [2024-10-13 17:56:33.472340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:43.913 [2024-10-13 17:56:33.472353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.472362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.472426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.472436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:43.914 [2024-10-13 17:56:33.472445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.472458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.472545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.472577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:43.914 [2024-10-13 17:56:33.472588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.472596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.472614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.472624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:43.914 [2024-10-13 17:56:33.472634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.472642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
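
The trace_step records above follow a fixed per-step pattern (Action, name, duration, status), so the shutdown and rollback timing is easy to summarize from a saved copy of this output. A minimal sketch in the suite's own shell style, assuming the target's log has been captured one entry per line in a file named ftl.log (the file name and the helper itself are illustrative, not part of the test scripts):

    #!/usr/bin/env bash
    # Pair each "name:" trace_step entry with the "duration:" entry that
    # follows it, then print the steps sorted slowest-first.
    awk -F': ' '
        /trace_step.*name:/     { step = $NF }            # remember the step name
        /trace_step.*duration:/ { sub(/ ms$/, "", $NF)    # strip the unit
                                  printf "%10.3f ms  %s\n", $NF, step }
    ' ftl.log | sort -rn | head

Run against the shutdown sequence above, the top of that list would be the NV cache, band info, trim, and superblock persists, which dominate the step durations printed here.
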
00:26:43.914 [2024-10-13 17:56:33.565066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.565116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.914 [2024-10-13 17:56:33.565130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.565139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:43.914 [2024-10-13 17:56:33.640377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:43.914 [2024-10-13 17:56:33.640491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:43.914 [2024-10-13 17:56:33.640622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:43.914 [2024-10-13 17:56:33.640772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:43.914 [2024-10-13 17:56:33.640837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.640900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.640927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:43.914 [2024-10-13 17:56:33.640937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.640947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.641005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.914 [2024-10-13 17:56:33.641019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:43.914 [2024-10-13 17:56:33.641029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.914 [2024-10-13 17:56:33.641039] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.914 [2024-10-13 17:56:33.641204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.683 ms, result 0 00:26:44.858 00:26:44.858 00:26:44.859 17:56:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:47.407 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78043 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78043 ']' 00:26:47.407 Process with pid 78043 is not found 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 78043 00:26:47.407 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78043) - No such process 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78043 is not found' 00:26:47.407 17:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:47.407 Remove shared memory files 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:47.407 ************************************ 00:26:47.407 END TEST ftl_dirty_shutdown 00:26:47.407 ************************************ 00:26:47.407 00:26:47.407 real 4m19.399s 00:26:47.407 user 4m59.273s 00:26:47.407 sys 0m30.167s 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.407 17:56:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:47.407 17:56:37 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:47.407 17:56:37 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:47.407 17:56:37 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.407 17:56:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:47.407 ************************************ 
00:26:47.407 START TEST ftl_upgrade_shutdown 00:26:47.407 ************************************ 00:26:47.407 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:47.668 * Looking for test storage... 00:26:47.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:47.668 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.669 --rc genhtml_branch_coverage=1 00:26:47.669 --rc genhtml_function_coverage=1 00:26:47.669 --rc genhtml_legend=1 00:26:47.669 --rc geninfo_all_blocks=1 00:26:47.669 --rc geninfo_unexecuted_blocks=1 00:26:47.669 00:26:47.669 ' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.669 --rc genhtml_branch_coverage=1 00:26:47.669 --rc genhtml_function_coverage=1 00:26:47.669 --rc genhtml_legend=1 00:26:47.669 --rc geninfo_all_blocks=1 00:26:47.669 --rc geninfo_unexecuted_blocks=1 00:26:47.669 00:26:47.669 ' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.669 --rc genhtml_branch_coverage=1 00:26:47.669 --rc genhtml_function_coverage=1 00:26:47.669 --rc genhtml_legend=1 00:26:47.669 --rc geninfo_all_blocks=1 00:26:47.669 --rc geninfo_unexecuted_blocks=1 00:26:47.669 00:26:47.669 ' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.669 --rc genhtml_branch_coverage=1 00:26:47.669 --rc genhtml_function_coverage=1 00:26:47.669 --rc genhtml_legend=1 00:26:47.669 --rc geninfo_all_blocks=1 00:26:47.669 --rc geninfo_unexecuted_blocks=1 00:26:47.669 00:26:47.669 ' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:47.669 17:56:37 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80827 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:47.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80827 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80827 ']' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.669 17:56:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:47.669 [2024-10-13 17:56:37.454497] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
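
The waitforlisten step above blocks the test until the spdk_tgt it just launched (pid 80827 here) is actually serving RPCs on /var/tmp/spdk.sock; only then are the bdev and FTL RPCs that follow safe to issue. A rough shell equivalent of that readiness gate, built from the paths shown in this log, the kill -0 liveness probe the common helpers use, and the standard rpc.py spdk_get_version method (the retry budget and poll interval are illustrative guesses, not the suite's actual values):

    #!/usr/bin/env bash
    # Launch the target on core 0 and wait for its RPC socket to come up.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!

    for _ in $(seq 1 100); do
        # Bail out early if the target process died during startup.
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
        # A successful RPC round-trip means the socket is ready.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
                spdk_get_version >/dev/null 2>&1; then
            exit 0
        fi
        sleep 0.1
    done
    exit 1    # target never answered within the retry budget

Polling a cheap RPC instead of sleeping a fixed interval keeps the wait proportional to the target's actual boot time rather than a worst-case guess.
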
00:26:47.669 [2024-10-13 17:56:37.454675] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80827 ] 00:26:47.930 [2024-10-13 17:56:37.609729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.930 [2024-10-13 17:56:37.723158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:48.872 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:49.133 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:49.394 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:49.394 { 00:26:49.394 "name": "basen1", 00:26:49.394 "aliases": [ 00:26:49.394 "24d75b3f-255e-4025-a9f4-ad7e0bc46011" 00:26:49.394 ], 00:26:49.394 "product_name": "NVMe disk", 00:26:49.394 "block_size": 4096, 00:26:49.394 "num_blocks": 1310720, 00:26:49.394 "uuid": "24d75b3f-255e-4025-a9f4-ad7e0bc46011", 00:26:49.394 "numa_id": -1, 00:26:49.394 "assigned_rate_limits": { 00:26:49.395 "rw_ios_per_sec": 0, 00:26:49.395 "rw_mbytes_per_sec": 0, 00:26:49.395 "r_mbytes_per_sec": 0, 00:26:49.395 "w_mbytes_per_sec": 0 00:26:49.395 }, 00:26:49.395 "claimed": true, 00:26:49.395 "claim_type": "read_many_write_one", 00:26:49.395 "zoned": false, 00:26:49.395 "supported_io_types": { 00:26:49.395 "read": true, 00:26:49.395 "write": true, 00:26:49.395 "unmap": true, 00:26:49.395 "flush": true, 00:26:49.395 "reset": true, 00:26:49.395 "nvme_admin": true, 00:26:49.395 "nvme_io": true, 00:26:49.395 "nvme_io_md": false, 00:26:49.395 "write_zeroes": true, 00:26:49.395 "zcopy": false, 00:26:49.395 "get_zone_info": false, 00:26:49.395 "zone_management": false, 00:26:49.395 "zone_append": false, 00:26:49.395 "compare": true, 00:26:49.395 "compare_and_write": false, 00:26:49.395 "abort": true, 00:26:49.395 "seek_hole": false, 00:26:49.395 "seek_data": false, 00:26:49.395 "copy": true, 00:26:49.395 "nvme_iov_md": false 00:26:49.395 }, 00:26:49.395 "driver_specific": { 00:26:49.395 "nvme": [ 00:26:49.395 { 00:26:49.395 "pci_address": "0000:00:11.0", 00:26:49.395 "trid": { 00:26:49.395 "trtype": "PCIe", 00:26:49.395 "traddr": "0000:00:11.0" 00:26:49.395 }, 00:26:49.395 "ctrlr_data": { 00:26:49.395 "cntlid": 0, 00:26:49.395 "vendor_id": "0x1b36", 00:26:49.395 "model_number": "QEMU NVMe Ctrl", 00:26:49.395 "serial_number": "12341", 00:26:49.395 "firmware_revision": "8.0.0", 00:26:49.395 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:49.395 "oacs": { 00:26:49.395 "security": 0, 00:26:49.395 "format": 1, 00:26:49.395 "firmware": 0, 00:26:49.395 "ns_manage": 1 00:26:49.395 }, 00:26:49.395 "multi_ctrlr": false, 00:26:49.395 "ana_reporting": false 00:26:49.395 }, 00:26:49.395 "vs": { 00:26:49.395 "nvme_version": "1.4" 00:26:49.395 }, 00:26:49.395 "ns_data": { 00:26:49.395 "id": 1, 00:26:49.395 "can_share": false 00:26:49.395 } 00:26:49.395 } 00:26:49.395 ], 00:26:49.395 "mp_policy": "active_passive" 00:26:49.395 } 00:26:49.395 } 00:26:49.395 ]' 00:26:49.395 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:49.395 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:49.395 17:56:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:49.395 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:49.656 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=1409c982-e896-49e1-9aed-4e02ac888fc1 00:26:49.656 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:49.656 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1409c982-e896-49e1-9aed-4e02ac888fc1 00:26:49.656 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:49.917 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e1b3afc3-1bc1-409e-b305-a6397ac9cb34 00:26:49.917 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e1b3afc3-1bc1-409e-b305-a6397ac9cb34 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=018004ea-7ab9-479d-ae78-7e330a43f9c2 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 018004ea-7ab9-479d-ae78-7e330a43f9c2 ]] 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 018004ea-7ab9-479d-ae78-7e330a43f9c2 5120 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=018004ea-7ab9-479d-ae78-7e330a43f9c2 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 018004ea-7ab9-479d-ae78-7e330a43f9c2 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=018004ea-7ab9-479d-ae78-7e330a43f9c2 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:50.179 17:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 018004ea-7ab9-479d-ae78-7e330a43f9c2 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:50.440 { 00:26:50.440 "name": "018004ea-7ab9-479d-ae78-7e330a43f9c2", 00:26:50.440 "aliases": [ 00:26:50.440 "lvs/basen1p0" 00:26:50.440 ], 00:26:50.440 "product_name": "Logical Volume", 00:26:50.440 "block_size": 4096, 00:26:50.440 "num_blocks": 5242880, 00:26:50.440 "uuid": "018004ea-7ab9-479d-ae78-7e330a43f9c2", 00:26:50.440 "assigned_rate_limits": { 00:26:50.440 "rw_ios_per_sec": 0, 00:26:50.440 "rw_mbytes_per_sec": 0, 00:26:50.440 "r_mbytes_per_sec": 0, 00:26:50.440 "w_mbytes_per_sec": 0 00:26:50.440 }, 00:26:50.440 "claimed": false, 00:26:50.440 "zoned": false, 00:26:50.440 "supported_io_types": { 00:26:50.440 "read": true, 00:26:50.440 "write": true, 00:26:50.440 "unmap": true, 00:26:50.440 "flush": false, 00:26:50.440 "reset": true, 00:26:50.440 "nvme_admin": false, 00:26:50.440 "nvme_io": false, 00:26:50.440 "nvme_io_md": false, 00:26:50.440 "write_zeroes": 
true, 00:26:50.440 "zcopy": false, 00:26:50.440 "get_zone_info": false, 00:26:50.440 "zone_management": false, 00:26:50.440 "zone_append": false, 00:26:50.440 "compare": false, 00:26:50.440 "compare_and_write": false, 00:26:50.440 "abort": false, 00:26:50.440 "seek_hole": true, 00:26:50.440 "seek_data": true, 00:26:50.440 "copy": false, 00:26:50.440 "nvme_iov_md": false 00:26:50.440 }, 00:26:50.440 "driver_specific": { 00:26:50.440 "lvol": { 00:26:50.440 "lvol_store_uuid": "e1b3afc3-1bc1-409e-b305-a6397ac9cb34", 00:26:50.440 "base_bdev": "basen1", 00:26:50.440 "thin_provision": true, 00:26:50.440 "num_allocated_clusters": 0, 00:26:50.440 "snapshot": false, 00:26:50.440 "clone": false, 00:26:50.440 "esnap_clone": false 00:26:50.440 } 00:26:50.440 } 00:26:50.440 } 00:26:50.440 ]' 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:50.440 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:50.701 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:50.701 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:50.701 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:50.963 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:50.963 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:50.963 17:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 018004ea-7ab9-479d-ae78-7e330a43f9c2 -c cachen1p0 --l2p_dram_limit 2 00:26:50.963 [2024-10-13 17:56:40.760991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.761036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:50.963 [2024-10-13 17:56:40.761051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:50.963 [2024-10-13 17:56:40.761060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.761113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.761123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.963 [2024-10-13 17:56:40.761132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:50.963 [2024-10-13 17:56:40.761138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.761155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:50.963 [2024-10-13 
17:56:40.761783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:50.963 [2024-10-13 17:56:40.761806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.761813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.963 [2024-10-13 17:56:40.761821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.652 ms 00:26:50.963 [2024-10-13 17:56:40.761827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.761885] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID cf3d196a-bcce-44eb-a88f-9144f9828ab0 00:26:50.963 [2024-10-13 17:56:40.763167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.763204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:50.963 [2024-10-13 17:56:40.763213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:50.963 [2024-10-13 17:56:40.763222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.770043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.770072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.963 [2024-10-13 17:56:40.770080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.786 ms 00:26:50.963 [2024-10-13 17:56:40.770089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.770125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.770134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.963 [2024-10-13 17:56:40.770141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:50.963 [2024-10-13 17:56:40.770153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.770197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.770207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:50.963 [2024-10-13 17:56:40.770213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:50.963 [2024-10-13 17:56:40.770221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.770240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:50.963 [2024-10-13 17:56:40.773485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.773511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.963 [2024-10-13 17:56:40.773520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.249 ms 00:26:50.963 [2024-10-13 17:56:40.773530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.773553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.773574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:50.963 [2024-10-13 17:56:40.773582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:50.963 [2024-10-13 17:56:40.773588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.773602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:50.963 [2024-10-13 17:56:40.773714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:50.963 [2024-10-13 17:56:40.773728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:50.963 [2024-10-13 17:56:40.773736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:50.963 [2024-10-13 17:56:40.773747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:50.963 [2024-10-13 17:56:40.773754] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:50.963 [2024-10-13 17:56:40.773762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:50.963 [2024-10-13 17:56:40.773769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:50.963 [2024-10-13 17:56:40.773776] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:50.963 [2024-10-13 17:56:40.773782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:50.963 [2024-10-13 17:56:40.773790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.773798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:50.963 [2024-10-13 17:56:40.773806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:26:50.963 [2024-10-13 17:56:40.773813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.773879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.963 [2024-10-13 17:56:40.773885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:50.963 [2024-10-13 17:56:40.773893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:50.963 [2024-10-13 17:56:40.773905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.963 [2024-10-13 17:56:40.773982] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:50.963 [2024-10-13 17:56:40.773989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:50.963 [2024-10-13 17:56:40.773999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.963 [2024-10-13 17:56:40.774006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.963 [2024-10-13 17:56:40.774014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:50.963 [2024-10-13 17:56:40.774019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:50.963 [2024-10-13 17:56:40.774026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:50.963 [2024-10-13 17:56:40.774031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:50.963 [2024-10-13 17:56:40.774038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:50.963 [2024-10-13 17:56:40.774043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.963 [2024-10-13 17:56:40.774049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:50.963 [2024-10-13 17:56:40.774055] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:50.964 [2024-10-13 17:56:40.774061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:50.964 [2024-10-13 17:56:40.774072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:50.964 [2024-10-13 17:56:40.774080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:50.964 [2024-10-13 17:56:40.774094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:50.964 [2024-10-13 17:56:40.774100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:50.964 [2024-10-13 17:56:40.774114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:50.964 [2024-10-13 17:56:40.774130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:50.964 [2024-10-13 17:56:40.774149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:50.964 [2024-10-13 17:56:40.774165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:50.964 [2024-10-13 17:56:40.774185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:50.964 [2024-10-13 17:56:40.774201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:50.964 [2024-10-13 17:56:40.774219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:50.964 [2024-10-13 17:56:40.774235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:50.964 [2024-10-13 17:56:40.774242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774246] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:50.964 [2024-10-13 17:56:40.774253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:50.964 [2024-10-13 17:56:40.774259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.964 [2024-10-13 17:56:40.774273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:50.964 [2024-10-13 17:56:40.774283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:50.964 [2024-10-13 17:56:40.774288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:50.964 [2024-10-13 17:56:40.774295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:50.964 [2024-10-13 17:56:40.774300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:50.964 [2024-10-13 17:56:40.774307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:50.964 [2024-10-13 17:56:40.774315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:50.964 [2024-10-13 17:56:40.774324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:50.964 [2024-10-13 17:56:40.774338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:50.964 [2024-10-13 17:56:40.774356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:50.964 [2024-10-13 17:56:40.774362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:50.964 [2024-10-13 17:56:40.774368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:50.964 [2024-10-13 17:56:40.774375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:50.964 [2024-10-13 17:56:40.774418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:50.964 [2024-10-13 17:56:40.774425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.964 [2024-10-13 17:56:40.774441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:50.964 [2024-10-13 17:56:40.774446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:50.964 [2024-10-13 17:56:40.774453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:50.964 [2024-10-13 17:56:40.774458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.964 [2024-10-13 17:56:40.774466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:50.964 [2024-10-13 17:56:40.774472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:26:50.964 [2024-10-13 17:56:40.774479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.964 [2024-10-13 17:56:40.774520] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
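
Condensed from the trace above, the FTL bring-up is a short RPC sequence from ftl/common.sh (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; UUIDs and the PCIe address are specific to this run):

  rpc.py bdev_lvol_delete_lvstore -u 1409c982-e896-49e1-9aed-4e02ac888fc1   # clear the stale store found by bdev_lvol_get_lvstores
  rpc.py bdev_lvol_create_lvstore basen1 lvs                                 # -> e1b3afc3-1bc1-409e-b305-a6397ac9cb34
  rpc.py bdev_lvol_create basen1p0 20480 -t -u e1b3afc3-1bc1-409e-b305-a6397ac9cb34   # -> 018004ea-... (thin-provisioned base)
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0        # -> cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                                 # -> cachen1p0 (5120 MiB NV cache)
  rpc.py -t 60 bdev_ftl_create -b ftl -d 018004ea-7ab9-479d-ae78-7e330a43f9c2 -c cachen1p0 --l2p_dram_limit 2

The size probe in the middle is plain arithmetic: the base volume reports 5242880 blocks of 4096 B, i.e. 5242880 * 4096 / 2^20 = 20480 MiB, matching the 20480 MiB requested from the lvstore and the "Base device capacity: 20480.00 MiB" layout line.
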
00:26:50.964 [2024-10-13 17:56:40.774532] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:55.172 [2024-10-13 17:56:44.759783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.760079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:55.172 [2024-10-13 17:56:44.760112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3985.246 ms 00:26:55.172 [2024-10-13 17:56:44.760126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.796941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.797018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:55.172 [2024-10-13 17:56:44.797035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.547 ms 00:26:55.172 [2024-10-13 17:56:44.797048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.797167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.797183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:55.172 [2024-10-13 17:56:44.797193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:55.172 [2024-10-13 17:56:44.797208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.837174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.837234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:55.172 [2024-10-13 17:56:44.837249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.927 ms 00:26:55.172 [2024-10-13 17:56:44.837261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.837304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.837317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:55.172 [2024-10-13 17:56:44.837326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:55.172 [2024-10-13 17:56:44.837340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.838114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.838156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:55.172 [2024-10-13 17:56:44.838169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:26:55.172 [2024-10-13 17:56:44.838181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.838245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.838266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:55.172 [2024-10-13 17:56:44.838277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:55.172 [2024-10-13 17:56:44.838292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.858676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.858725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:55.172 [2024-10-13 17:56:44.858737] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.358 ms 00:26:55.172 [2024-10-13 17:56:44.858751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.873698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:55.172 [2024-10-13 17:56:44.875479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.875525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:55.172 [2024-10-13 17:56:44.875540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.628 ms 00:26:55.172 [2024-10-13 17:56:44.875549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.920427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.920485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:55.172 [2024-10-13 17:56:44.920504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.818 ms 00:26:55.172 [2024-10-13 17:56:44.920513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.920653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.920667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:55.172 [2024-10-13 17:56:44.920683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:26:55.172 [2024-10-13 17:56:44.920696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.946029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.946079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:55.172 [2024-10-13 17:56:44.946097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.275 ms 00:26:55.172 [2024-10-13 17:56:44.946107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.971517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.971551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:55.172 [2024-10-13 17:56:44.971587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.353 ms 00:26:55.172 [2024-10-13 17:56:44.971595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.172 [2024-10-13 17:56:44.972218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.172 [2024-10-13 17:56:44.972247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:55.172 [2024-10-13 17:56:44.972260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:26:55.172 [2024-10-13 17:56:44.972269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.063573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.434 [2024-10-13 17:56:45.063623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:55.434 [2024-10-13 17:56:45.063654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 91.256 ms 00:26:55.434 [2024-10-13 17:56:45.063664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.092015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:55.434 [2024-10-13 17:56:45.092061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:55.434 [2024-10-13 17:56:45.092090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.249 ms 00:26:55.434 [2024-10-13 17:56:45.092098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.117994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.434 [2024-10-13 17:56:45.118185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:55.434 [2024-10-13 17:56:45.118212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.840 ms 00:26:55.434 [2024-10-13 17:56:45.118221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.144455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.434 [2024-10-13 17:56:45.144508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:55.434 [2024-10-13 17:56:45.144525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.121 ms 00:26:55.434 [2024-10-13 17:56:45.144533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.144723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.434 [2024-10-13 17:56:45.144743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:55.434 [2024-10-13 17:56:45.144760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:55.434 [2024-10-13 17:56:45.144768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.144870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.434 [2024-10-13 17:56:45.144881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:55.434 [2024-10-13 17:56:45.144892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:55.434 [2024-10-13 17:56:45.144900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.434 [2024-10-13 17:56:45.146273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4384.699 ms, result 0 00:26:55.434 { 00:26:55.434 "name": "ftl", 00:26:55.434 "uuid": "cf3d196a-bcce-44eb-a88f-9144f9828ab0" 00:26:55.434 } 00:26:55.434 17:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:55.695 [2024-10-13 17:56:45.377217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.695 17:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:55.956 17:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:56.216 [2024-10-13 17:56:45.805758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:56.216 17:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:56.216 [2024-10-13 17:56:46.023619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:56.477 17:56:46 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:56.738 Fill FTL, iteration 1 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80960 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80960 /var/tmp/spdk.tgt.sock 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80960 ']' 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:56.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.738 17:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:56.738 [2024-10-13 17:56:46.443845] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:26:56.738 [2024-10-13 17:56:46.444200] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80960 ] 00:26:57.035 [2024-10-13 17:56:46.594997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.035 [2024-10-13 17:56:46.706378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.607 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.607 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:57.608 17:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:57.869 ftln1 00:26:57.869 17:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:57.869 17:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80960 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80960 ']' 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80960 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80960 00:26:58.131 killing process with pid 80960 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80960' 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80960 00:26:58.131 17:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80960 00:27:00.044 17:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:00.045 17:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:00.045 [2024-10-13 17:56:49.394909] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
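
The tcp_dd helper traced above never writes to the FTL bdev locally: tcp_initiator_setup launches a second SPDK process on its own RPC socket, attaches the NVMe/TCP subsystem exported earlier (which shows up as bdev ftln1), dumps the bdev subsystem config, stops the helper, and hands that config to spdk_dd. A condensed sketch of the steps visible in the trace (paths abbreviated; the redirect into test/ftl/config/ini.json is implied by the --json flag below rather than visible in the xtrace):

  build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0     # -> ftln1
  { echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ini.json
  kill $spdk_ini_pid                                                         # killprocess 80960 in the trace
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
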
00:27:00.045 [2024-10-13 17:56:49.395048] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81002 ] 00:27:00.045 [2024-10-13 17:56:49.546924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.045 [2024-10-13 17:56:49.656375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.429  [2024-10-13T17:56:52.187Z] Copying: 245/1024 [MB] (245 MBps) [2024-10-13T17:56:53.130Z] Copying: 492/1024 [MB] (247 MBps) [2024-10-13T17:56:54.074Z] Copying: 737/1024 [MB] (245 MBps) [2024-10-13T17:56:54.334Z] Copying: 970/1024 [MB] (233 MBps) [2024-10-13T17:56:54.904Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:27:05.090 00:27:05.090 Calculate MD5 checksum, iteration 1 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:05.090 17:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:05.090 [2024-10-13 17:56:54.895033] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
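
A quick check on the fill numbers above: iteration 1 pushed 1024 blocks of 1048576 B (1 GiB) at an average of 242 MBps, so the data movement itself took roughly 1024 / 242 ≈ 4.2 s, consistent with the progress stamps running from shortly after the reactor start at 17:56:49 to the final copy line at 17:56:54.
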
00:27:05.090 [2024-10-13 17:56:54.895165] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81060 ] 00:27:05.350 [2024-10-13 17:56:55.045593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.350 [2024-10-13 17:56:55.147906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.731  [2024-10-13T17:56:57.117Z] Copying: 646/1024 [MB] (646 MBps) [2024-10-13T17:56:57.689Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:27:07.875 00:27:07.875 17:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:07.875 17:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:09.780 Fill FTL, iteration 2 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=769c5e0ba529d0b098b08e895079eebf 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:09.780 17:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:09.780 [2024-10-13 17:56:59.390054] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
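
Reassembled from the upgrade_shutdown.sh trace markers above (@28-@48), the fill/checksum driver is a plain two-pass loop. This is a sketch from the traced variables, with the test file path abbreviated; the script itself is authoritative:

  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0 sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=test/ftl/file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum test/ftl/file | cut -f1 '-d ')       # iteration 1 -> 769c5e0ba529d0b098b08e895079eebf
  done

So iteration 1 covers the first GiB (seek/skip 0) and iteration 2 the second GiB (seek/skip 1024 blocks of 1 MiB), which is exactly what the --seek=1024/--skip=1024 invocations that follow do.
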
00:27:09.780 [2024-10-13 17:56:59.390327] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81123 ] 00:27:09.780 [2024-10-13 17:56:59.539516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.039 [2024-10-13 17:56:59.639947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.423  [2024-10-13T17:57:02.181Z] Copying: 243/1024 [MB] (243 MBps) [2024-10-13T17:57:03.124Z] Copying: 461/1024 [MB] (218 MBps) [2024-10-13T17:57:04.068Z] Copying: 707/1024 [MB] (246 MBps) [2024-10-13T17:57:04.356Z] Copying: 962/1024 [MB] (255 MBps) [2024-10-13T17:57:04.960Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:27:15.146 00:27:15.146 Calculate MD5 checksum, iteration 2 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:15.146 17:57:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:15.146 [2024-10-13 17:57:04.888350] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:27:15.146 [2024-10-13 17:57:04.888484] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81176 ] 00:27:15.408 [2024-10-13 17:57:05.038858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.408 [2024-10-13 17:57:05.145576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.792  [2024-10-13T17:57:07.179Z] Copying: 642/1024 [MB] (642 MBps) [2024-10-13T17:57:08.126Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:27:18.312 00:27:18.582 17:57:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:18.582 17:57:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.488 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:20.488 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0f3dfd0960e5641872681c5bd71bc6c0 00:27:20.488 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:20.488 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:20.488 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:20.747 [2024-10-13 17:57:10.398322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.747 [2024-10-13 17:57:10.398374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:20.747 [2024-10-13 17:57:10.398385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:20.747 [2024-10-13 17:57:10.398392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.747 [2024-10-13 17:57:10.398410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.747 [2024-10-13 17:57:10.398416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:20.747 [2024-10-13 17:57:10.398422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.747 [2024-10-13 17:57:10.398428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.747 [2024-10-13 17:57:10.398446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.747 [2024-10-13 17:57:10.398452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:20.747 [2024-10-13 17:57:10.398459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:20.747 [2024-10-13 17:57:10.398464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.747 [2024-10-13 17:57:10.398510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.181 ms, result 0 00:27:20.747 true 00:27:20.747 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:21.006 { 00:27:21.006 "name": "ftl", 00:27:21.006 "properties": [ 00:27:21.006 { 00:27:21.006 "name": "superblock_version", 00:27:21.006 "value": 5, 00:27:21.006 "read-only": true 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "name": "base_device", 00:27:21.006 "bands": [ 00:27:21.006 { 00:27:21.006 "id": 0, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 
00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 1, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 2, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 3, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 4, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 5, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 6, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 7, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 8, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 9, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 10, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 11, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 12, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 13, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 14, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 15, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 16, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 17, 00:27:21.006 "state": "FREE", 00:27:21.006 "validity": 0.0 00:27:21.006 } 00:27:21.006 ], 00:27:21.006 "read-only": true 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "name": "cache_device", 00:27:21.006 "type": "bdev", 00:27:21.006 "chunks": [ 00:27:21.006 { 00:27:21.006 "id": 0, 00:27:21.006 "state": "INACTIVE", 00:27:21.006 "utilization": 0.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 1, 00:27:21.006 "state": "CLOSED", 00:27:21.006 "utilization": 1.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 2, 00:27:21.006 "state": "CLOSED", 00:27:21.006 "utilization": 1.0 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 3, 00:27:21.006 "state": "OPEN", 00:27:21.006 "utilization": 0.001953125 00:27:21.006 }, 00:27:21.006 { 00:27:21.006 "id": 4, 00:27:21.007 "state": "OPEN", 00:27:21.007 "utilization": 0.0 00:27:21.007 } 00:27:21.007 ], 00:27:21.007 "read-only": true 00:27:21.007 }, 00:27:21.007 { 00:27:21.007 "name": "verbose_mode", 00:27:21.007 "value": true, 00:27:21.007 "unit": "", 00:27:21.007 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:21.007 }, 00:27:21.007 { 00:27:21.007 "name": "prep_upgrade_on_shutdown", 00:27:21.007 "value": false, 00:27:21.007 "unit": "", 00:27:21.007 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:21.007 } 00:27:21.007 ] 00:27:21.007 } 00:27:21.007 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:21.007 [2024-10-13 17:57:10.714555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
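
Condensed, the property round-trip around this dump reduces to the following (the long band/chunk listing is as printed above):

  rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3 in this run: chunks 1 and 2 are CLOSED at utilization 1.0 (the two 1 GiB fills),
  #    chunk 3 is OPEN at 0.001953125
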
00:27:21.007 [2024-10-13 17:57:10.714592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:21.007 [2024-10-13 17:57:10.714600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:21.007 [2024-10-13 17:57:10.714606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.007 [2024-10-13 17:57:10.714622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.007 [2024-10-13 17:57:10.714628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:21.007 [2024-10-13 17:57:10.714635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:21.007 [2024-10-13 17:57:10.714640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.007 [2024-10-13 17:57:10.714655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.007 [2024-10-13 17:57:10.714661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:21.007 [2024-10-13 17:57:10.714667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:21.007 [2024-10-13 17:57:10.714672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.007 [2024-10-13 17:57:10.714712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:27:21.007 true 00:27:21.007 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:21.007 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:21.007 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:21.266 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:21.266 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:21.266 17:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:21.524 [2024-10-13 17:57:11.122871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.524 [2024-10-13 17:57:11.122898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:21.524 [2024-10-13 17:57:11.122906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:21.524 [2024-10-13 17:57:11.122911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.524 [2024-10-13 17:57:11.122928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.524 [2024-10-13 17:57:11.122934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:21.524 [2024-10-13 17:57:11.122939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:21.524 [2024-10-13 17:57:11.122945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.524 [2024-10-13 17:57:11.122959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.524 [2024-10-13 17:57:11.122964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:21.524 [2024-10-13 17:57:11.122970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:21.524 [2024-10-13 17:57:11.122975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:21.524 [2024-10-13 17:57:11.123012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.131 ms, result 0 00:27:21.524 true 00:27:21.524 17:57:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:21.524 { 00:27:21.524 "name": "ftl", 00:27:21.524 "properties": [ 00:27:21.524 { 00:27:21.524 "name": "superblock_version", 00:27:21.524 "value": 5, 00:27:21.524 "read-only": true 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "name": "base_device", 00:27:21.524 "bands": [ 00:27:21.524 { 00:27:21.524 "id": 0, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 1, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 2, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 3, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 4, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 5, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 6, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 7, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 8, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 9, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 10, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 11, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 12, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 13, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 14, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 15, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 16, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 17, 00:27:21.524 "state": "FREE", 00:27:21.524 "validity": 0.0 00:27:21.524 } 00:27:21.524 ], 00:27:21.524 "read-only": true 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "name": "cache_device", 00:27:21.524 "type": "bdev", 00:27:21.524 "chunks": [ 00:27:21.524 { 00:27:21.524 "id": 0, 00:27:21.524 "state": "INACTIVE", 00:27:21.524 "utilization": 0.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 1, 00:27:21.524 "state": "CLOSED", 00:27:21.524 "utilization": 1.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 2, 00:27:21.524 "state": "CLOSED", 00:27:21.524 "utilization": 1.0 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 3, 00:27:21.524 "state": "OPEN", 00:27:21.524 "utilization": 0.001953125 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "id": 4, 00:27:21.524 "state": "OPEN", 00:27:21.524 "utilization": 0.0 00:27:21.524 } 00:27:21.524 ], 00:27:21.524 "read-only": true 00:27:21.524 }, 00:27:21.524 { 00:27:21.524 "name": "verbose_mode", 
00:27:21.524 "value": true, 00:27:21.524 "unit": "", 00:27:21.524 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:21.525 }, 00:27:21.525 { 00:27:21.525 "name": "prep_upgrade_on_shutdown", 00:27:21.525 "value": true, 00:27:21.525 "unit": "", 00:27:21.525 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:21.525 } 00:27:21.525 ] 00:27:21.525 } 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80827 ]] 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80827 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80827 ']' 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80827 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80827 00:27:21.525 killing process with pid 80827 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80827' 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80827 00:27:21.525 17:57:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80827 00:27:22.092 [2024-10-13 17:57:11.860744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:22.092 [2024-10-13 17:57:11.870847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.092 [2024-10-13 17:57:11.870970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:22.092 [2024-10-13 17:57:11.870985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:22.092 [2024-10-13 17:57:11.870992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.092 [2024-10-13 17:57:11.871012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:22.092 [2024-10-13 17:57:11.873115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.092 [2024-10-13 17:57:11.873139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:22.092 [2024-10-13 17:57:11.873147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.092 ms 00:27:22.092 [2024-10-13 17:57:11.873154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.638822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.638875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:30.228 [2024-10-13 17:57:19.638887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7765.630 ms 00:27:30.228 [2024-10-13 17:57:19.638893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.639955] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.639974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:30.228 [2024-10-13 17:57:19.639982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.049 ms 00:27:30.228 [2024-10-13 17:57:19.639988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.640861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.640881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:30.228 [2024-10-13 17:57:19.640889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:27:30.228 [2024-10-13 17:57:19.640896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.649450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.649479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:30.228 [2024-10-13 17:57:19.649487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.525 ms 00:27:30.228 [2024-10-13 17:57:19.649493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.655420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.655447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:30.228 [2024-10-13 17:57:19.655455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.900 ms 00:27:30.228 [2024-10-13 17:57:19.655462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.655525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.655533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:30.228 [2024-10-13 17:57:19.655540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:30.228 [2024-10-13 17:57:19.655546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.663492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.663516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:30.228 [2024-10-13 17:57:19.663523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.923 ms 00:27:30.228 [2024-10-13 17:57:19.663529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.671418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.671442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:30.228 [2024-10-13 17:57:19.671449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.863 ms 00:27:30.228 [2024-10-13 17:57:19.671454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.679255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.679280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:30.228 [2024-10-13 17:57:19.679287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.776 ms 00:27:30.228 [2024-10-13 17:57:19.679292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.687015] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.228 [2024-10-13 17:57:19.687039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:30.228 [2024-10-13 17:57:19.687045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.676 ms 00:27:30.228 [2024-10-13 17:57:19.687051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.228 [2024-10-13 17:57:19.687075] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:30.228 [2024-10-13 17:57:19.687085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.228 [2024-10-13 17:57:19.687093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.228 [2024-10-13 17:57:19.687106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:30.229 [2024-10-13 17:57:19.687112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:30.229 [2024-10-13 17:57:19.687202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:30.229 [2024-10-13 17:57:19.687208] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cf3d196a-bcce-44eb-a88f-9144f9828ab0 00:27:30.229 [2024-10-13 17:57:19.687214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:30.229 [2024-10-13 17:57:19.687220] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:30.229 [2024-10-13 17:57:19.687225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:30.229 [2024-10-13 17:57:19.687232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:30.229 [2024-10-13 17:57:19.687237] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:30.229 [2024-10-13 17:57:19.687243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:30.229 [2024-10-13 17:57:19.687249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:30.229 [2024-10-13 17:57:19.687254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:30.229 [2024-10-13 17:57:19.687260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:30.229 [2024-10-13 17:57:19.687266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.229 [2024-10-13 17:57:19.687275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:30.229 [2024-10-13 17:57:19.687281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:27:30.229 [2024-10-13 17:57:19.687289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.696988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.229 [2024-10-13 17:57:19.697150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:30.229 [2024-10-13 17:57:19.697163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.687 ms 00:27:30.229 [2024-10-13 17:57:19.697169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.697443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.229 [2024-10-13 17:57:19.697452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:30.229 [2024-10-13 17:57:19.697458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:27:30.229 [2024-10-13 17:57:19.697463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.730229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.730255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:30.229 [2024-10-13 17:57:19.730264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.730270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.730294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.730301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:30.229 [2024-10-13 17:57:19.730307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.730313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.730369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.730378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:30.229 [2024-10-13 17:57:19.730384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.730390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.730402] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.730411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:30.229 [2024-10-13 17:57:19.730417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.730423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.789984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.790016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:30.229 [2024-10-13 17:57:19.790024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.790030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:30.229 [2024-10-13 17:57:19.838387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.229 [2024-10-13 17:57:19.838459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.229 [2024-10-13 17:57:19.838523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.229 [2024-10-13 17:57:19.838632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:30.229 [2024-10-13 17:57:19.838672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.229 [2024-10-13 17:57:19.838722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 
[2024-10-13 17:57:19.838761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.229 [2024-10-13 17:57:19.838768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.229 [2024-10-13 17:57:19.838777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.229 [2024-10-13 17:57:19.838783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.229 [2024-10-13 17:57:19.838875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7967.989 ms, result 0 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81363 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81363 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81363 ']' 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.532 17:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.532 [2024-10-13 17:57:23.071406] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
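The roughly eight-second 'FTL shutdown' traced above (Persist L2P, NV cache, band and trim metadata, then 'Set FTL clean state') is the upgrade-preparation path: it runs because prep_upgrade_on_shutdown was true when the target was stopped. A minimal sketch of that sequence, built only from RPCs and paths that appear in this log; spdk_tgt_pid stands in for the pid tracked by the suite's tcp_target_setup helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Arm upgrade preparation and read it back (it reports "value": true above).
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    $rpc bdev_ftl_get_properties -b ftl | jq '.properties[] | select(.name == "prep_upgrade_on_shutdown").value'
    # A plain SIGTERM lets the app exit gracefully, which is what drives the
    # 'FTL shutdown' management process seen in the trace.
    kill "$spdk_tgt_pid"
    wait "$spdk_tgt_pid"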
00:27:33.532 [2024-10-13 17:57:23.071869] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81363 ] 00:27:33.532 [2024-10-13 17:57:23.224230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.793 [2024-10-13 17:57:23.384886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.736 [2024-10-13 17:57:24.284293] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:34.736 [2024-10-13 17:57:24.284733] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:34.736 [2024-10-13 17:57:24.440267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.736 [2024-10-13 17:57:24.440336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:34.736 [2024-10-13 17:57:24.440355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:34.736 [2024-10-13 17:57:24.440365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.736 [2024-10-13 17:57:24.440431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.736 [2024-10-13 17:57:24.440447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:34.737 [2024-10-13 17:57:24.440456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:34.737 [2024-10-13 17:57:24.440465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.440497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:34.737 [2024-10-13 17:57:24.441319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:34.737 [2024-10-13 17:57:24.441349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.441358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:34.737 [2024-10-13 17:57:24.441369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.864 ms 00:27:34.737 [2024-10-13 17:57:24.441379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.443819] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:34.737 [2024-10-13 17:57:24.459577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.459639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:34.737 [2024-10-13 17:57:24.459655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.760 ms 00:27:34.737 [2024-10-13 17:57:24.459666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.459765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.459777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:34.737 [2024-10-13 17:57:24.459787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:34.737 [2024-10-13 17:57:24.459796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.471696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 
17:57:24.471739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:34.737 [2024-10-13 17:57:24.471752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.800 ms 00:27:34.737 [2024-10-13 17:57:24.471768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.471847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.471857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:34.737 [2024-10-13 17:57:24.471867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:34.737 [2024-10-13 17:57:24.471876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.471949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.471961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:34.737 [2024-10-13 17:57:24.471970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:34.737 [2024-10-13 17:57:24.471979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.472010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:34.737 [2024-10-13 17:57:24.476704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.476915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:34.737 [2024-10-13 17:57:24.476944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.701 ms 00:27:34.737 [2024-10-13 17:57:24.476954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.476993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.477007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:34.737 [2024-10-13 17:57:24.477017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:34.737 [2024-10-13 17:57:24.477025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.477071] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:34.737 [2024-10-13 17:57:24.477100] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:34.737 [2024-10-13 17:57:24.477141] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:34.737 [2024-10-13 17:57:24.477162] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:34.737 [2024-10-13 17:57:24.477277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:34.737 [2024-10-13 17:57:24.477289] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:34.737 [2024-10-13 17:57:24.477301] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:34.737 [2024-10-13 17:57:24.477313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477322] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:34.737 [2024-10-13 17:57:24.477340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:34.737 [2024-10-13 17:57:24.477349] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:34.737 [2024-10-13 17:57:24.477361] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:34.737 [2024-10-13 17:57:24.477369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.477378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:34.737 [2024-10-13 17:57:24.477386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:27:34.737 [2024-10-13 17:57:24.477394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.477481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.737 [2024-10-13 17:57:24.477491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:34.737 [2024-10-13 17:57:24.477498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:34.737 [2024-10-13 17:57:24.477507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.737 [2024-10-13 17:57:24.477639] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:34.737 [2024-10-13 17:57:24.477652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:34.737 [2024-10-13 17:57:24.477661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:34.737 [2024-10-13 17:57:24.477686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:34.737 [2024-10-13 17:57:24.477700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:34.737 [2024-10-13 17:57:24.477710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:34.737 [2024-10-13 17:57:24.477717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:34.737 [2024-10-13 17:57:24.477732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:34.737 [2024-10-13 17:57:24.477740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:34.737 [2024-10-13 17:57:24.477760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:34.737 [2024-10-13 17:57:24.477768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:34.737 [2024-10-13 17:57:24.477782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:34.737 [2024-10-13 17:57:24.477789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477796] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:34.737 [2024-10-13 17:57:24.477803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:34.737 [2024-10-13 17:57:24.477810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:34.737 [2024-10-13 17:57:24.477823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:34.737 [2024-10-13 17:57:24.477830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:34.737 [2024-10-13 17:57:24.477852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:34.737 [2024-10-13 17:57:24.477859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:34.737 [2024-10-13 17:57:24.477872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:34.737 [2024-10-13 17:57:24.477879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:34.737 [2024-10-13 17:57:24.477892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:34.737 [2024-10-13 17:57:24.477899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:34.737 [2024-10-13 17:57:24.477913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:34.737 [2024-10-13 17:57:24.477932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:34.737 [2024-10-13 17:57:24.477953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:34.737 [2024-10-13 17:57:24.477960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.477967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:34.737 [2024-10-13 17:57:24.477977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:34.737 [2024-10-13 17:57:24.477987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:34.737 [2024-10-13 17:57:24.477995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.737 [2024-10-13 17:57:24.478003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:34.737 [2024-10-13 17:57:24.478010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:34.737 [2024-10-13 17:57:24.478017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:34.737 [2024-10-13 17:57:24.478024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:34.737 [2024-10-13 17:57:24.478030] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:34.737 [2024-10-13 17:57:24.478037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:34.738 [2024-10-13 17:57:24.478046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:34.738 [2024-10-13 17:57:24.478057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:34.738 [2024-10-13 17:57:24.478078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:34.738 [2024-10-13 17:57:24.478099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:34.738 [2024-10-13 17:57:24.478107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:34.738 [2024-10-13 17:57:24.478115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:34.738 [2024-10-13 17:57:24.478123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:34.738 [2024-10-13 17:57:24.478174] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:34.738 [2024-10-13 17:57:24.478183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:34.738 [2024-10-13 17:57:24.478201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:34.738 [2024-10-13 17:57:24.478208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:34.738 [2024-10-13 17:57:24.478215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:34.738 [2024-10-13 17:57:24.478223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.738 [2024-10-13 17:57:24.478230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:34.738 [2024-10-13 17:57:24.478239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:27:34.738 [2024-10-13 17:57:24.478247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.738 [2024-10-13 17:57:24.478292] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:34.738 [2024-10-13 17:57:24.478303] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:38.971 [2024-10-13 17:57:28.715231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.971 [2024-10-13 17:57:28.715352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:38.971 [2024-10-13 17:57:28.715374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4236.922 ms 00:27:38.971 [2024-10-13 17:57:28.715384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.971 [2024-10-13 17:57:28.753181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.971 [2024-10-13 17:57:28.753256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.971 [2024-10-13 17:57:28.753273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.452 ms 00:27:38.971 [2024-10-13 17:57:28.753283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.971 [2024-10-13 17:57:28.753391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.971 [2024-10-13 17:57:28.753404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:38.971 [2024-10-13 17:57:28.753416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:38.971 [2024-10-13 17:57:28.753433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.233 [2024-10-13 17:57:28.793609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.233 [2024-10-13 17:57:28.793672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:39.233 [2024-10-13 17:57:28.793686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.114 ms 00:27:39.233 [2024-10-13 17:57:28.793696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.233 [2024-10-13 17:57:28.793744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.233 [2024-10-13 17:57:28.793759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:39.233 [2024-10-13 17:57:28.793769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:39.233 [2024-10-13 17:57:28.793778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.233 [2024-10-13 17:57:28.794511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.233 [2024-10-13 17:57:28.794554] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:39.233 [2024-10-13 17:57:28.794588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:27:39.233 [2024-10-13 17:57:28.794598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.233 [2024-10-13 17:57:28.794659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.233 [2024-10-13 17:57:28.794669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:39.233 [2024-10-13 17:57:28.794685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:39.233 [2024-10-13 17:57:28.794694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.233 [2024-10-13 17:57:28.815638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.233 [2024-10-13 17:57:28.815695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:39.233 [2024-10-13 17:57:28.815709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.915 ms 00:27:39.234 [2024-10-13 17:57:28.815718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.831724] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:39.234 [2024-10-13 17:57:28.831782] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:39.234 [2024-10-13 17:57:28.831799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.831809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:39.234 [2024-10-13 17:57:28.831820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.947 ms 00:27:39.234 [2024-10-13 17:57:28.831829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.847108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.847167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:39.234 [2024-10-13 17:57:28.847181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.218 ms 00:27:39.234 [2024-10-13 17:57:28.847189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.860168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.860213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:39.234 [2024-10-13 17:57:28.860225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.912 ms 00:27:39.234 [2024-10-13 17:57:28.860233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.873021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.873069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:39.234 [2024-10-13 17:57:28.873081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.733 ms 00:27:39.234 [2024-10-13 17:57:28.873090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.873807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.873832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:39.234 [2024-10-13 
17:57:28.873844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:27:39.234 [2024-10-13 17:57:28.873858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.955085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.955174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:39.234 [2024-10-13 17:57:28.955203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.201 ms 00:27:39.234 [2024-10-13 17:57:28.955214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.967609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:39.234 [2024-10-13 17:57:28.968995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.969038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:39.234 [2024-10-13 17:57:28.969055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.692 ms 00:27:39.234 [2024-10-13 17:57:28.969065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.969192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.969207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:39.234 [2024-10-13 17:57:28.969218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:39.234 [2024-10-13 17:57:28.969231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.969340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.969353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:39.234 [2024-10-13 17:57:28.969364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:39.234 [2024-10-13 17:57:28.969373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.969407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.969419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:39.234 [2024-10-13 17:57:28.969429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:39.234 [2024-10-13 17:57:28.969438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.969480] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:39.234 [2024-10-13 17:57:28.969495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.969504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:39.234 [2024-10-13 17:57:28.969513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:39.234 [2024-10-13 17:57:28.969522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.996670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.996723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:39.234 [2024-10-13 17:57:28.996739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.119 ms 00:27:39.234 [2024-10-13 17:57:28.996757] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.996864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.234 [2024-10-13 17:57:28.996877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:39.234 [2024-10-13 17:57:28.996888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:39.234 [2024-10-13 17:57:28.996897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.234 [2024-10-13 17:57:28.998736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4557.738 ms, result 0 00:27:39.234 [2024-10-13 17:57:29.013133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.234 [2024-10-13 17:57:29.029159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:39.234 [2024-10-13 17:57:29.037688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:39.495 17:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:39.495 17:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:39.495 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:39.495 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:39.495 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:39.495 [2024-10-13 17:57:29.301708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.495 [2024-10-13 17:57:29.301767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:39.495 [2024-10-13 17:57:29.301784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:39.495 [2024-10-13 17:57:29.301793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.495 [2024-10-13 17:57:29.301818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.495 [2024-10-13 17:57:29.301832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:39.495 [2024-10-13 17:57:29.301842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.495 [2024-10-13 17:57:29.301851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.495 [2024-10-13 17:57:29.301874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.495 [2024-10-13 17:57:29.301883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:39.495 [2024-10-13 17:57:29.301892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:39.495 [2024-10-13 17:57:29.301901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.495 [2024-10-13 17:57:29.301971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:27:39.495 true 00:27:39.756 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:39.756 { 00:27:39.756 "name": "ftl", 00:27:39.756 "properties": [ 00:27:39.756 { 00:27:39.756 "name": "superblock_version", 00:27:39.756 "value": 5, 00:27:39.756 "read-only": true 00:27:39.756 }, 
00:27:39.756 { 00:27:39.756 "name": "base_device", 00:27:39.756 "bands": [ 00:27:39.756 { 00:27:39.756 "id": 0, 00:27:39.756 "state": "CLOSED", 00:27:39.756 "validity": 1.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 1, 00:27:39.756 "state": "CLOSED", 00:27:39.756 "validity": 1.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 2, 00:27:39.756 "state": "CLOSED", 00:27:39.756 "validity": 0.007843137254901933 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 3, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 4, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 5, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 6, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 7, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 8, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 9, 00:27:39.756 "state": "FREE", 00:27:39.756 "validity": 0.0 00:27:39.756 }, 00:27:39.756 { 00:27:39.756 "id": 10, 00:27:39.756 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 11, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 12, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 13, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 14, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 15, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 16, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 17, 00:27:39.757 "state": "FREE", 00:27:39.757 "validity": 0.0 00:27:39.757 } 00:27:39.757 ], 00:27:39.757 "read-only": true 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "name": "cache_device", 00:27:39.757 "type": "bdev", 00:27:39.757 "chunks": [ 00:27:39.757 { 00:27:39.757 "id": 0, 00:27:39.757 "state": "INACTIVE", 00:27:39.757 "utilization": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 1, 00:27:39.757 "state": "OPEN", 00:27:39.757 "utilization": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 2, 00:27:39.757 "state": "OPEN", 00:27:39.757 "utilization": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 3, 00:27:39.757 "state": "FREE", 00:27:39.757 "utilization": 0.0 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "id": 4, 00:27:39.757 "state": "FREE", 00:27:39.757 "utilization": 0.0 00:27:39.757 } 00:27:39.757 ], 00:27:39.757 "read-only": true 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "name": "verbose_mode", 00:27:39.757 "value": true, 00:27:39.757 "unit": "", 00:27:39.757 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:39.757 }, 00:27:39.757 { 00:27:39.757 "name": "prep_upgrade_on_shutdown", 00:27:39.757 "value": false, 00:27:39.757 "unit": "", 00:27:39.757 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:39.757 } 00:27:39.757 ] 00:27:39.757 } 00:27:39.757 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:39.757 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:39.757 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.019 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:40.019 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:40.019 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:40.019 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.019 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.279 Validate MD5 checksum, iteration 1 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.279 17:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.279 [2024-10-13 17:57:30.079312] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:27:40.279 [2024-10-13 17:57:30.079489] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81456 ] 00:27:40.540 [2024-10-13 17:57:30.232685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.801 [2024-10-13 17:57:30.382957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.188  [2024-10-13T17:57:32.944Z] Copying: 512/1024 [MB] (512 MBps) [2024-10-13T17:57:33.887Z] Copying: 1024/1024 [MB] (average 568 MBps) 00:27:44.073 00:27:44.073 17:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:44.073 17:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=769c5e0ba529d0b098b08e895079eebf 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 769c5e0ba529d0b098b08e895079eebf != \7\6\9\c\5\e\0\b\a\5\2\9\d\0\b\0\9\8\b\0\8\e\8\9\5\0\7\9\e\e\b\f ]] 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:46.620 Validate MD5 checksum, iteration 2 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:46.620 17:57:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.620 [2024-10-13 17:57:36.024605] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
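For scale: --bs=1048576 with --count=1024 means each pass reads 1024 x 1 MiB = 1 GiB, and --skip counts input blocks, so this second iteration (--skip=1024) starts at byte offset 1024 * 1048576 = 1073741824, exactly where the first pass stopped; after it completes, skip advances to 2048.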
00:27:46.620 [2024-10-13 17:57:36.024730] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81523 ] 00:27:46.620 [2024-10-13 17:57:36.176629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.620 [2024-10-13 17:57:36.318789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.535  [2024-10-13T17:57:38.922Z] Copying: 499/1024 [MB] (499 MBps) [2024-10-13T17:57:45.513Z] Copying: 1024/1024 [MB] (average 551 MBps) 00:27:55.699 00:27:55.699 17:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:55.699 17:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0f3dfd0960e5641872681c5bd71bc6c0 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0f3dfd0960e5641872681c5bd71bc6c0 != \0\f\3\d\f\d\0\9\6\0\e\5\6\4\1\8\7\2\6\8\1\c\5\b\d\7\1\b\c\6\c\0 ]] 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81363 ]] 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81363 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81634 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81634 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81634 ']' 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
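This is the crux of the test: tcp_target_shutdown_dirty (@114) kills the target with SIGKILL so FTL never performs a clean shutdown, then tcp_target_setup (@115) starts a fresh spdk_tgt from the saved tgt.json and waits on its RPC socket. Roughly, per the common.sh lines traced above (a sketch; the backgrounding and pid capture details are assumptions):

kill -9 "$spdk_tgt_pid"                      # common.sh@138: 81363 in this run; simulate a crash
unset spdk_tgt_pid                           # common.sh@139
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &   # common.sh@85
spdk_tgt_pid=$!                              # common.sh@89: 81634 in this run
waitforlisten "$spdk_tgt_pid"                # common.sh@91: poll /var/tmp/spdk.sock until the target answers

The dirty restart is what forces the long recovery sequence that follows: the new target reopens the base and cache bdevs, loads the superblock from shared memory, replays the P2L checkpoints, and recovers the two still-open NV cache chunks (seq ids 14 and 15) before serving I/O again.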
00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.641 17:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:56.902 [2024-10-13 17:57:46.493772] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:27:56.902 [2024-10-13 17:57:46.494344] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81634 ] 00:27:56.902 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81363 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:56.902 [2024-10-13 17:57:46.643027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.163 [2024-10-13 17:57:46.738749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.737 [2024-10-13 17:57:47.367300] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.737 [2024-10-13 17:57:47.367350] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.737 [2024-10-13 17:57:47.516062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.737 [2024-10-13 17:57:47.516095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:57.737 [2024-10-13 17:57:47.516107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.737 [2024-10-13 17:57:47.516114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.737 [2024-10-13 17:57:47.516156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.737 [2024-10-13 17:57:47.516167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:57.737 [2024-10-13 17:57:47.516173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:57.737 [2024-10-13 17:57:47.516179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.737 [2024-10-13 17:57:47.516198] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:57.737 [2024-10-13 17:57:47.516731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:57.737 [2024-10-13 17:57:47.516752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.737 [2024-10-13 17:57:47.516759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:57.737 [2024-10-13 17:57:47.516767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:27:57.737 [2024-10-13 17:57:47.516774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.737 [2024-10-13 17:57:47.517002] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:57.737 [2024-10-13 17:57:47.531148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.737 [2024-10-13 17:57:47.531177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:57.737 [2024-10-13 17:57:47.531189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.147 ms 00:27:57.737 [2024-10-13 17:57:47.531196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.737 [2024-10-13 17:57:47.538283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:57.738 [2024-10-13 17:57:47.538309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:57.738 [2024-10-13 17:57:47.538318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:57.738 [2024-10-13 17:57:47.538324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.538611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.538625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:57.738 [2024-10-13 17:57:47.538633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:27:57.738 [2024-10-13 17:57:47.538639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.538682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.538689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:57.738 [2024-10-13 17:57:47.538695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:57.738 [2024-10-13 17:57:47.538704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.538726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.538733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:57.738 [2024-10-13 17:57:47.538739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:57.738 [2024-10-13 17:57:47.538744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.538760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:57.738 [2024-10-13 17:57:47.541135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.541153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:57.738 [2024-10-13 17:57:47.541160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.379 ms 00:27:57.738 [2024-10-13 17:57:47.541166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.541192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.541198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:57.738 [2024-10-13 17:57:47.541207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:57.738 [2024-10-13 17:57:47.541213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.541229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:57.738 [2024-10-13 17:57:47.541245] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:57.738 [2024-10-13 17:57:47.541273] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:57.738 [2024-10-13 17:57:47.541286] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:57.738 [2024-10-13 17:57:47.541368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:57.738 [2024-10-13 17:57:47.541379] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:57.738 [2024-10-13 17:57:47.541387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:57.738 [2024-10-13 17:57:47.541395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541411] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:57.738 [2024-10-13 17:57:47.541416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:57.738 [2024-10-13 17:57:47.541423] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:57.738 [2024-10-13 17:57:47.541429] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:57.738 [2024-10-13 17:57:47.541436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.541442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:57.738 [2024-10-13 17:57:47.541448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:27:57.738 [2024-10-13 17:57:47.541456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.541521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.738 [2024-10-13 17:57:47.541527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:57.738 [2024-10-13 17:57:47.541533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:57.738 [2024-10-13 17:57:47.541538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.738 [2024-10-13 17:57:47.541625] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:57.738 [2024-10-13 17:57:47.541634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:57.738 [2024-10-13 17:57:47.541641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:57.738 [2024-10-13 17:57:47.541662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:57.738 [2024-10-13 17:57:47.541672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:57.738 [2024-10-13 17:57:47.541677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:57.738 [2024-10-13 17:57:47.541682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:57.738 [2024-10-13 17:57:47.541693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:57.738 [2024-10-13 17:57:47.541698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:57.738 [2024-10-13 17:57:47.541710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:57.738 [2024-10-13 17:57:47.541715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:57.738 [2024-10-13 17:57:47.541725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:57.738 [2024-10-13 17:57:47.541730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:57.738 [2024-10-13 17:57:47.541744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:57.738 [2024-10-13 17:57:47.541759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:57.738 [2024-10-13 17:57:47.541781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:57.738 [2024-10-13 17:57:47.541798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:57.738 [2024-10-13 17:57:47.541815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:57.738 [2024-10-13 17:57:47.541830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:57.738 [2024-10-13 17:57:47.541847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:57.738 [2024-10-13 17:57:47.541861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:57.738 [2024-10-13 17:57:47.541866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541871] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:57.738 [2024-10-13 17:57:47.541878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:57.738 [2024-10-13 17:57:47.541883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:57.738 [2024-10-13 17:57:47.541894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:57.738 [2024-10-13 17:57:47.541899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:57.738 [2024-10-13 17:57:47.541905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:57.738 [2024-10-13 17:57:47.541911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:57.738 [2024-10-13 17:57:47.541920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:57.738 [2024-10-13 17:57:47.541925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:57.738 [2024-10-13 17:57:47.541932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:57.738 [2024-10-13 17:57:47.541939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.738 [2024-10-13 17:57:47.541946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:57.738 [2024-10-13 17:57:47.541952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:57.738 [2024-10-13 17:57:47.541957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:57.738 [2024-10-13 17:57:47.541962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:57.738 [2024-10-13 17:57:47.541969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:57.738 [2024-10-13 17:57:47.541974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:57.738 [2024-10-13 17:57:47.541980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:57.738 [2024-10-13 17:57:47.541986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.541991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.541997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:57.739 [2024-10-13 17:57:47.542025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:57.739 [2024-10-13 17:57:47.542032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.739 [2024-10-13 17:57:47.542043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:57.739 [2024-10-13 17:57:47.542048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:57.739 [2024-10-13 17:57:47.542055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:57.739 [2024-10-13 17:57:47.542061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.739 [2024-10-13 17:57:47.542066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:57.739 [2024-10-13 17:57:47.542072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:27:57.739 [2024-10-13 17:57:47.542080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.564104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.564131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.001 [2024-10-13 17:57:47.564143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.984 ms 00:27:58.001 [2024-10-13 17:57:47.564151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.564186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.564194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:58.001 [2024-10-13 17:57:47.564201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:58.001 [2024-10-13 17:57:47.564207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.590989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.591016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.001 [2024-10-13 17:57:47.591025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.737 ms 00:27:58.001 [2024-10-13 17:57:47.591032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.591056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.591063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.001 [2024-10-13 17:57:47.591070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:58.001 [2024-10-13 17:57:47.591076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.591152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.591165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:58.001 [2024-10-13 17:57:47.591173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:58.001 [2024-10-13 17:57:47.591179] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.591214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.591221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.001 [2024-10-13 17:57:47.591228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:58.001 [2024-10-13 17:57:47.591234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.604711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.604737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.001 [2024-10-13 17:57:47.604747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.460 ms 00:27:58.001 [2024-10-13 17:57:47.604754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.604838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.604848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:58.001 [2024-10-13 17:57:47.604855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.001 [2024-10-13 17:57:47.604861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.632543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.632589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:58.001 [2024-10-13 17:57:47.632601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.666 ms 00:27:58.001 [2024-10-13 17:57:47.632609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.639920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.640047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.001 [2024-10-13 17:57:47.640119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:27:58.001 [2024-10-13 17:57:47.640157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.686631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.686795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:58.001 [2024-10-13 17:57:47.686852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.396 ms 00:27:58.001 [2024-10-13 17:57:47.686901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.687067] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:58.001 [2024-10-13 17:57:47.687216] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:58.001 [2024-10-13 17:57:47.687365] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:58.001 [2024-10-13 17:57:47.687495] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:58.001 [2024-10-13 17:57:47.687539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.687609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:58.001 [2024-10-13 
17:57:47.687662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:27:58.001 [2024-10-13 17:57:47.687719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.687812] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:58.001 [2024-10-13 17:57:47.687857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.687895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:58.001 [2024-10-13 17:57:47.687927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:58.001 [2024-10-13 17:57:47.687962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.700316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.700411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:58.001 [2024-10-13 17:57:47.700454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.305 ms 00:27:58.001 [2024-10-13 17:57:47.700496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.707159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.707230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:58.001 [2024-10-13 17:57:47.707278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:58.001 [2024-10-13 17:57:47.707309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.001 [2024-10-13 17:57:47.707404] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:58.001 [2024-10-13 17:57:47.707631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.001 [2024-10-13 17:57:47.707682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:58.001 [2024-10-13 17:57:47.707726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:27:58.001 [2024-10-13 17:57:47.707764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.951 [2024-10-13 17:57:48.471577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.951 [2024-10-13 17:57:48.471662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:58.951 [2024-10-13 17:57:48.471676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 763.162 ms 00:27:58.951 [2024-10-13 17:57:48.471683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.951 [2024-10-13 17:57:48.475536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.951 [2024-10-13 17:57:48.475573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:58.951 [2024-10-13 17:57:48.475582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.344 ms 00:27:58.951 [2024-10-13 17:57:48.475589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.951 [2024-10-13 17:57:48.476597] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:58.951 [2024-10-13 17:57:48.476625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.951 [2024-10-13 17:57:48.476638] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:58.951 [2024-10-13 17:57:48.476646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.004 ms 00:27:58.951 [2024-10-13 17:57:48.476653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.951 [2024-10-13 17:57:48.476681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.951 [2024-10-13 17:57:48.476688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:58.951 [2024-10-13 17:57:48.476695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.951 [2024-10-13 17:57:48.476702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.951 [2024-10-13 17:57:48.476728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 769.330 ms, result 0 00:27:58.951 [2024-10-13 17:57:48.476762] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:58.951 [2024-10-13 17:57:48.476857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.951 [2024-10-13 17:57:48.476866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:58.951 [2024-10-13 17:57:48.476873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.096 ms 00:27:58.951 [2024-10-13 17:57:48.476879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.557 [2024-10-13 17:57:49.043092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.557 [2024-10-13 17:57:49.043158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:59.557 [2024-10-13 17:57:49.043171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 565.334 ms 00:27:59.557 [2024-10-13 17:57:49.043178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.557 [2024-10-13 17:57:49.046763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.557 [2024-10-13 17:57:49.046802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:59.557 [2024-10-13 17:57:49.046811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.993 ms 00:27:59.557 [2024-10-13 17:57:49.046818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.557 [2024-10-13 17:57:49.047388] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:59.557 [2024-10-13 17:57:49.047425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.557 [2024-10-13 17:57:49.047433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:59.557 [2024-10-13 17:57:49.047442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:27:59.557 [2024-10-13 17:57:49.047448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.557 [2024-10-13 17:57:49.047480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.557 [2024-10-13 17:57:49.047488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:59.557 [2024-10-13 17:57:49.047495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.557 [2024-10-13 17:57:49.047501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 
17:57:49.047532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 570.763 ms, result 0 00:27:59.558 [2024-10-13 17:57:49.047582] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:59.558 [2024-10-13 17:57:49.047593] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.558 [2024-10-13 17:57:49.047611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.047618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:59.558 [2024-10-13 17:57:49.047625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1340.226 ms 00:27:59.558 [2024-10-13 17:57:49.047631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.047656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.047663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:59.558 [2024-10-13 17:57:49.047671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:59.558 [2024-10-13 17:57:49.047678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.057369] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.558 [2024-10-13 17:57:49.057460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.057468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.558 [2024-10-13 17:57:49.057475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.766 ms 00:27:59.558 [2024-10-13 17:57:49.057482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.058048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.058065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:59.558 [2024-10-13 17:57:49.058072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:27:59.558 [2024-10-13 17:57:49.058078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.059789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.059807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:59.558 [2024-10-13 17:57:49.059816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.694 ms 00:27:59.558 [2024-10-13 17:57:49.059822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.059855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.059863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:59.558 [2024-10-13 17:57:49.059870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.558 [2024-10-13 17:57:49.059876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.059962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.059973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.558 
[2024-10-13 17:57:49.059979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:59.558 [2024-10-13 17:57:49.059986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.060007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.060013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.558 [2024-10-13 17:57:49.060019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:59.558 [2024-10-13 17:57:49.060025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.060048] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.558 [2024-10-13 17:57:49.060056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.060062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.558 [2024-10-13 17:57:49.060070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:59.558 [2024-10-13 17:57:49.060076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.060119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.558 [2024-10-13 17:57:49.060126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.558 [2024-10-13 17:57:49.060133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:59.558 [2024-10-13 17:57:49.060139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.558 [2024-10-13 17:57:49.061300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1544.814 ms, result 0 00:27:59.558 [2024-10-13 17:57:49.073719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.558 [2024-10-13 17:57:49.089722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.558 [2024-10-13 17:57:49.097849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.558 Validate MD5 checksum, iteration 1 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.558 17:57:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.558 17:57:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:59.558 [2024-10-13 17:57:49.191122] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:27:59.558 [2024-10-13 17:57:49.191238] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81670 ] 00:27:59.558 [2024-10-13 17:57:49.338855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.819 [2024-10-13 17:57:49.475081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.734  [2024-10-13T17:57:51.809Z] Copying: 589/1024 [MB] (589 MBps) [2024-10-13T17:57:52.752Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:28:02.938 00:28:02.938 17:57:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:02.938 17:57:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=769c5e0ba529d0b098b08e895079eebf 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 769c5e0ba529d0b098b08e895079eebf != \7\6\9\c\5\e\0\b\a\5\2\9\d\0\b\0\9\8\b\0\8\e\8\9\5\0\7\9\e\e\b\f ]] 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:04.853 Validate MD5 checksum, iteration 2 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:04.853 17:57:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:04.853 [2024-10-13 17:57:54.424462] Starting SPDK v25.01-pre git sha1 
bbce7a874 / DPDK 24.03.0 initialization... 00:28:04.853 [2024-10-13 17:57:54.424565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81728 ] 00:28:04.853 [2024-10-13 17:57:54.566157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.114 [2024-10-13 17:57:54.662585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.500  [2024-10-13T17:57:56.886Z] Copying: 658/1024 [MB] (658 MBps) [2024-10-13T17:57:57.458Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:28:07.644 00:28:07.644 17:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:07.644 17:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0f3dfd0960e5641872681c5bd71bc6c0 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0f3dfd0960e5641872681c5bd71bc6c0 != \0\f\3\d\f\d\0\9\6\0\e\5\6\4\1\8\7\2\6\8\1\c\5\b\d\7\1\b\c\6\c\0 ]] 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81634 ]] 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81634 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81634 ']' 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81634 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81634 00:28:10.190 killing process with pid 81634 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81634' 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 81634 00:28:10.190 17:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81634 00:28:10.451 [2024-10-13 17:58:00.065617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:10.451 [2024-10-13 17:58:00.075914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.075956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:10.451 [2024-10-13 17:58:00.075968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:10.451 [2024-10-13 17:58:00.075975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.075995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:10.451 [2024-10-13 17:58:00.078200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.078227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:10.451 [2024-10-13 17:58:00.078236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.194 ms 00:28:10.451 [2024-10-13 17:58:00.078243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.078455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.078474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:10.451 [2024-10-13 17:58:00.078482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:10.451 [2024-10-13 17:58:00.078488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.079653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.079678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:10.451 [2024-10-13 17:58:00.079686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:28:10.451 [2024-10-13 17:58:00.079693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.080548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.080576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:10.451 [2024-10-13 17:58:00.080588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.830 ms 00:28:10.451 [2024-10-13 17:58:00.080594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.088143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.088171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:10.451 [2024-10-13 17:58:00.088180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.518 ms 00:28:10.451 [2024-10-13 17:58:00.088186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.092632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.092665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:10.451 [2024-10-13 17:58:00.092674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.417 ms 00:28:10.451 [2024-10-13 17:58:00.092681] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.092748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.092756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:10.451 [2024-10-13 17:58:00.092762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:10.451 [2024-10-13 17:58:00.092768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.100031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.100057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:10.451 [2024-10-13 17:58:00.100065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.248 ms 00:28:10.451 [2024-10-13 17:58:00.100070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.107413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.107439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:10.451 [2024-10-13 17:58:00.107446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.314 ms 00:28:10.451 [2024-10-13 17:58:00.107452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.114810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.114837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:10.451 [2024-10-13 17:58:00.114844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.331 ms 00:28:10.451 [2024-10-13 17:58:00.114850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.122154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.451 [2024-10-13 17:58:00.122180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:10.451 [2024-10-13 17:58:00.122187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.248 ms 00:28:10.451 [2024-10-13 17:58:00.122194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.451 [2024-10-13 17:58:00.122221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:10.451 [2024-10-13 17:58:00.122233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.451 [2024-10-13 17:58:00.122241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.451 [2024-10-13 17:58:00.122247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:10.451 [2024-10-13 17:58:00.122254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 
[2024-10-13 17:58:00.122284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.451 [2024-10-13 17:58:00.122344] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:10.451 [2024-10-13 17:58:00.122355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cf3d196a-bcce-44eb-a88f-9144f9828ab0 00:28:10.451 [2024-10-13 17:58:00.122362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:10.451 [2024-10-13 17:58:00.122368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:10.452 [2024-10-13 17:58:00.122374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:10.452 [2024-10-13 17:58:00.122381] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:10.452 [2024-10-13 17:58:00.122387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:10.452 [2024-10-13 17:58:00.122393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:10.452 [2024-10-13 17:58:00.122399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:10.452 [2024-10-13 17:58:00.122405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:10.452 [2024-10-13 17:58:00.122415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:10.452 [2024-10-13 17:58:00.122422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.452 [2024-10-13 17:58:00.122428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:10.452 [2024-10-13 17:58:00.122435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:28:10.452 [2024-10-13 17:58:00.122441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.132332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.452 [2024-10-13 17:58:00.132359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:10.452 [2024-10-13 17:58:00.132366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.875 ms 00:28:10.452 [2024-10-13 17:58:00.132373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
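The bands validity dump above is internally consistent with the test's I/O: two full bands plus one partially filled band account for exactly the 2048 MiB covered by the two 1024 MiB validation windows. A quick cross-check, using plain arithmetic rather than anything from the harness (the 4 KiB block size is inferred from the layout dump earlier, where 0x480000 blocks map to 18432.00 MiB):

echo $(( 2 * 261120 + 2048 ))        # bands 1-3 valid blocks = 524288, matching "total valid LBAs: 524288"
echo $(( 524288 * 4096 / 1048576 ))  # 524288 blocks x 4 KiB = 2048 MiB, i.e. two 1024 MiB passes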
00:28:10.452 [2024-10-13 17:58:00.132677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.452 [2024-10-13 17:58:00.132692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:10.452 [2024-10-13 17:58:00.132703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:28:10.452 [2024-10-13 17:58:00.132709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.167656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.452 [2024-10-13 17:58:00.167684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:10.452 [2024-10-13 17:58:00.167692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.452 [2024-10-13 17:58:00.167699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.167723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.452 [2024-10-13 17:58:00.167730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:10.452 [2024-10-13 17:58:00.167740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.452 [2024-10-13 17:58:00.167747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.167816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.452 [2024-10-13 17:58:00.167825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:10.452 [2024-10-13 17:58:00.167832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.452 [2024-10-13 17:58:00.167839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.167853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.452 [2024-10-13 17:58:00.167861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:10.452 [2024-10-13 17:58:00.167867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.452 [2024-10-13 17:58:00.167876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.452 [2024-10-13 17:58:00.232129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.452 [2024-10-13 17:58:00.232163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:10.452 [2024-10-13 17:58:00.232172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.452 [2024-10-13 17:58:00.232179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:10.713 [2024-10-13 17:58:00.283581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:10.713 [2024-10-13 17:58:00.283686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283693] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:10.713 [2024-10-13 17:58:00.283760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:10.713 [2024-10-13 17:58:00.283871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:10.713 [2024-10-13 17:58:00.283923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.283966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.283974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:10.713 [2024-10-13 17:58:00.283981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.283987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.284028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.713 [2024-10-13 17:58:00.284037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:10.713 [2024-10-13 17:58:00.284044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.713 [2024-10-13 17:58:00.284050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.713 [2024-10-13 17:58:00.284162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 208.219 ms, result 0 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:11.286 Remove shared memory files 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:11.286 17:58:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81363 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:11.286 00:28:11.286 real 1m23.807s 00:28:11.286 user 1m53.585s 00:28:11.286 sys 0m20.243s 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.286 17:58:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 ************************************ 00:28:11.286 END TEST ftl_upgrade_shutdown 00:28:11.286 ************************************ 00:28:11.286 17:58:01 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:11.286 17:58:01 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:11.286 17:58:01 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:11.286 17:58:01 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.286 17:58:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 ************************************ 00:28:11.286 START TEST ftl_restore_fast 00:28:11.286 ************************************ 00:28:11.286 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:11.547 * Looking for test storage... 00:28:11.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:11.547 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.548 --rc genhtml_branch_coverage=1 00:28:11.548 --rc genhtml_function_coverage=1 00:28:11.548 --rc genhtml_legend=1 00:28:11.548 --rc geninfo_all_blocks=1 00:28:11.548 --rc geninfo_unexecuted_blocks=1 00:28:11.548 00:28:11.548 ' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.548 --rc genhtml_branch_coverage=1 00:28:11.548 --rc genhtml_function_coverage=1 00:28:11.548 --rc genhtml_legend=1 00:28:11.548 --rc geninfo_all_blocks=1 00:28:11.548 --rc geninfo_unexecuted_blocks=1 00:28:11.548 00:28:11.548 ' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.548 --rc genhtml_branch_coverage=1 00:28:11.548 --rc genhtml_function_coverage=1 00:28:11.548 --rc genhtml_legend=1 00:28:11.548 --rc geninfo_all_blocks=1 00:28:11.548 --rc geninfo_unexecuted_blocks=1 00:28:11.548 00:28:11.548 ' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.548 --rc genhtml_branch_coverage=1 00:28:11.548 --rc genhtml_function_coverage=1 00:28:11.548 --rc genhtml_legend=1 00:28:11.548 --rc geninfo_all_blocks=1 00:28:11.548 --rc geninfo_unexecuted_blocks=1 00:28:11.548 00:28:11.548 ' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
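The xtrace above is the test harness probing the installed lcov: it extracts the version with awk '{print $NF}', runs lt 1.15 2 (which delegates to cmp_versions splitting both strings on ".", "-" or ":" and comparing component-wise), and, since 1.15 is older than 2, keeps the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings in LCOV_OPTS. A condensed, standalone sketch of that comparison; helper naming mirrors the traced script, but this is a simplified reconstruction, not the verbatim scripts/common.sh source:

#!/usr/bin/env bash
# Return success (0) when version $1 is strictly older than version $2.
lt() {
    local IFS=.-:   # split on the same separators as the traced script
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # missing parts count as 0
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    done
    return 1   # equal versions: not strictly less-than
}

lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy --rc lcov_* option names"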
00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.oL3Ec0ByRf 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:11.548 17:58:01 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81878 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81878 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 81878 ']' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:11.548 17:58:01 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.548 [2024-10-13 17:58:01.290735] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:28:11.548 [2024-10-13 17:58:01.290849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81878 ] 00:28:11.809 [2024-10-13 17:58:01.434906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.809 [2024-10-13 17:58:01.528493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:12.381 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:12.642 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:12.903 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:12.903 { 00:28:12.903 "name": "nvme0n1", 00:28:12.903 "aliases": [ 00:28:12.903 "43bca628-d421-4394-ac48-dd9a711d275f" 00:28:12.903 ], 00:28:12.903 "product_name": "NVMe disk", 00:28:12.903 "block_size": 4096, 00:28:12.903 "num_blocks": 1310720, 00:28:12.903 "uuid": "43bca628-d421-4394-ac48-dd9a711d275f", 00:28:12.903 "numa_id": -1, 00:28:12.903 "assigned_rate_limits": { 00:28:12.903 "rw_ios_per_sec": 0, 00:28:12.903 "rw_mbytes_per_sec": 0, 00:28:12.904 "r_mbytes_per_sec": 0, 00:28:12.904 "w_mbytes_per_sec": 0 00:28:12.904 }, 00:28:12.904 "claimed": true, 00:28:12.904 "claim_type": "read_many_write_one", 00:28:12.904 "zoned": false, 00:28:12.904 "supported_io_types": { 00:28:12.904 "read": true, 00:28:12.904 "write": true, 00:28:12.904 "unmap": true, 00:28:12.904 "flush": true, 00:28:12.904 "reset": true, 00:28:12.904 "nvme_admin": true, 00:28:12.904 "nvme_io": true, 00:28:12.904 "nvme_io_md": false, 00:28:12.904 "write_zeroes": true, 00:28:12.904 "zcopy": false, 00:28:12.904 "get_zone_info": false, 00:28:12.904 "zone_management": false, 00:28:12.904 "zone_append": false, 00:28:12.904 "compare": true, 00:28:12.904 "compare_and_write": false, 00:28:12.904 "abort": true, 00:28:12.904 "seek_hole": false, 00:28:12.904 "seek_data": false, 00:28:12.904 "copy": true, 00:28:12.904 "nvme_iov_md": false 00:28:12.904 }, 00:28:12.904 "driver_specific": { 00:28:12.904 "nvme": [ 00:28:12.904 { 00:28:12.904 "pci_address": "0000:00:11.0", 00:28:12.904 "trid": { 00:28:12.904 "trtype": "PCIe", 00:28:12.904 "traddr": "0000:00:11.0" 00:28:12.904 }, 00:28:12.904 "ctrlr_data": { 00:28:12.904 "cntlid": 0, 00:28:12.904 "vendor_id": "0x1b36", 00:28:12.904 "model_number": "QEMU NVMe Ctrl", 00:28:12.904 "serial_number": "12341", 00:28:12.904 "firmware_revision": "8.0.0", 00:28:12.904 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:12.904 "oacs": { 00:28:12.904 "security": 0, 00:28:12.904 "format": 1, 00:28:12.904 "firmware": 0, 00:28:12.904 "ns_manage": 1 00:28:12.904 }, 00:28:12.904 "multi_ctrlr": false, 00:28:12.904 "ana_reporting": false 00:28:12.904 }, 00:28:12.904 "vs": { 00:28:12.904 "nvme_version": "1.4" 00:28:12.904 }, 00:28:12.904 "ns_data": { 00:28:12.904 "id": 1, 00:28:12.904 "can_share": false 00:28:12.904 } 00:28:12.904 } 00:28:12.904 ], 00:28:12.904 "mp_policy": "active_passive" 00:28:12.904 } 00:28:12.904 } 00:28:12.904 ]' 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:12.904 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:13.165 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=e1b3afc3-1bc1-409e-b305-a6397ac9cb34 00:28:13.165 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:13.165 17:58:02 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1b3afc3-1bc1-409e-b305-a6397ac9cb34 00:28:13.426 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:13.426 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=46e0ed76-29a7-4695-add2-33ba81c8546f 00:28:13.426 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 46e0ed76-29a7-4695-add2-33ba81c8546f 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:13.688 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:13.949 { 00:28:13.949 "name": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:13.949 "aliases": [ 00:28:13.949 "lvs/nvme0n1p0" 00:28:13.949 ], 00:28:13.949 "product_name": "Logical Volume", 00:28:13.949 "block_size": 4096, 00:28:13.949 "num_blocks": 26476544, 00:28:13.949 "uuid": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:13.949 "assigned_rate_limits": { 00:28:13.949 "rw_ios_per_sec": 0, 00:28:13.949 "rw_mbytes_per_sec": 0, 00:28:13.949 "r_mbytes_per_sec": 0, 00:28:13.949 "w_mbytes_per_sec": 0 00:28:13.949 }, 00:28:13.949 "claimed": false, 00:28:13.949 "zoned": false, 00:28:13.949 "supported_io_types": { 00:28:13.949 "read": true, 00:28:13.949 "write": true, 00:28:13.949 "unmap": true, 00:28:13.949 "flush": false, 00:28:13.949 "reset": true, 00:28:13.949 "nvme_admin": false, 00:28:13.949 "nvme_io": false, 00:28:13.949 "nvme_io_md": false, 00:28:13.949 "write_zeroes": true, 00:28:13.949 "zcopy": false, 00:28:13.949 "get_zone_info": false, 00:28:13.949 "zone_management": false, 00:28:13.949 
"zone_append": false, 00:28:13.949 "compare": false, 00:28:13.949 "compare_and_write": false, 00:28:13.949 "abort": false, 00:28:13.949 "seek_hole": true, 00:28:13.949 "seek_data": true, 00:28:13.949 "copy": false, 00:28:13.949 "nvme_iov_md": false 00:28:13.949 }, 00:28:13.949 "driver_specific": { 00:28:13.949 "lvol": { 00:28:13.949 "lvol_store_uuid": "46e0ed76-29a7-4695-add2-33ba81c8546f", 00:28:13.949 "base_bdev": "nvme0n1", 00:28:13.949 "thin_provision": true, 00:28:13.949 "num_allocated_clusters": 0, 00:28:13.949 "snapshot": false, 00:28:13.949 "clone": false, 00:28:13.949 "esnap_clone": false 00:28:13.949 } 00:28:13.949 } 00:28:13.949 } 00:28:13.949 ]' 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:13.949 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:14.210 17:58:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.471 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:14.471 { 00:28:14.471 "name": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:14.471 "aliases": [ 00:28:14.471 "lvs/nvme0n1p0" 00:28:14.471 ], 00:28:14.471 "product_name": "Logical Volume", 00:28:14.471 "block_size": 4096, 00:28:14.471 "num_blocks": 26476544, 00:28:14.471 "uuid": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:14.471 "assigned_rate_limits": { 00:28:14.471 "rw_ios_per_sec": 0, 00:28:14.471 "rw_mbytes_per_sec": 0, 00:28:14.471 "r_mbytes_per_sec": 0, 00:28:14.471 "w_mbytes_per_sec": 0 00:28:14.471 }, 00:28:14.471 "claimed": false, 00:28:14.471 "zoned": false, 00:28:14.471 "supported_io_types": { 00:28:14.471 "read": true, 00:28:14.471 "write": true, 00:28:14.471 "unmap": true, 00:28:14.471 "flush": false, 00:28:14.471 "reset": true, 00:28:14.471 "nvme_admin": false, 00:28:14.471 "nvme_io": false, 00:28:14.471 "nvme_io_md": false, 00:28:14.471 "write_zeroes": true, 00:28:14.471 "zcopy": false, 00:28:14.471 "get_zone_info": false, 00:28:14.471 
"zone_management": false, 00:28:14.471 "zone_append": false, 00:28:14.471 "compare": false, 00:28:14.471 "compare_and_write": false, 00:28:14.471 "abort": false, 00:28:14.471 "seek_hole": true, 00:28:14.471 "seek_data": true, 00:28:14.471 "copy": false, 00:28:14.471 "nvme_iov_md": false 00:28:14.471 }, 00:28:14.471 "driver_specific": { 00:28:14.471 "lvol": { 00:28:14.471 "lvol_store_uuid": "46e0ed76-29a7-4695-add2-33ba81c8546f", 00:28:14.471 "base_bdev": "nvme0n1", 00:28:14.471 "thin_provision": true, 00:28:14.471 "num_allocated_clusters": 0, 00:28:14.471 "snapshot": false, 00:28:14.471 "clone": false, 00:28:14.471 "esnap_clone": false 00:28:14.471 } 00:28:14.471 } 00:28:14.471 } 00:28:14.471 ]' 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:14.472 17:58:04 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:14.733 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df6d49fe-9811-4d6b-b929-4743b523df7e 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:14.995 { 00:28:14.995 "name": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:14.995 "aliases": [ 00:28:14.995 "lvs/nvme0n1p0" 00:28:14.995 ], 00:28:14.995 "product_name": "Logical Volume", 00:28:14.995 "block_size": 4096, 00:28:14.995 "num_blocks": 26476544, 00:28:14.995 "uuid": "df6d49fe-9811-4d6b-b929-4743b523df7e", 00:28:14.995 "assigned_rate_limits": { 00:28:14.995 "rw_ios_per_sec": 0, 00:28:14.995 "rw_mbytes_per_sec": 0, 00:28:14.995 "r_mbytes_per_sec": 0, 00:28:14.995 "w_mbytes_per_sec": 0 00:28:14.995 }, 00:28:14.995 "claimed": false, 00:28:14.995 "zoned": false, 00:28:14.995 "supported_io_types": { 00:28:14.995 "read": true, 00:28:14.995 "write": true, 00:28:14.995 "unmap": true, 00:28:14.995 "flush": false, 00:28:14.995 "reset": true, 00:28:14.995 "nvme_admin": false, 00:28:14.995 "nvme_io": false, 00:28:14.995 "nvme_io_md": false, 00:28:14.995 "write_zeroes": true, 00:28:14.995 "zcopy": false, 00:28:14.995 "get_zone_info": false, 00:28:14.995 "zone_management": false, 00:28:14.995 "zone_append": false, 00:28:14.995 "compare": false, 00:28:14.995 "compare_and_write": false, 00:28:14.995 "abort": false, 
00:28:14.995 "seek_hole": true, 00:28:14.995 "seek_data": true, 00:28:14.995 "copy": false, 00:28:14.995 "nvme_iov_md": false 00:28:14.995 }, 00:28:14.995 "driver_specific": { 00:28:14.995 "lvol": { 00:28:14.995 "lvol_store_uuid": "46e0ed76-29a7-4695-add2-33ba81c8546f", 00:28:14.995 "base_bdev": "nvme0n1", 00:28:14.995 "thin_provision": true, 00:28:14.995 "num_allocated_clusters": 0, 00:28:14.995 "snapshot": false, 00:28:14.995 "clone": false, 00:28:14.995 "esnap_clone": false 00:28:14.995 } 00:28:14.995 } 00:28:14.995 } 00:28:14.995 ]' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d df6d49fe-9811-4d6b-b929-4743b523df7e --l2p_dram_limit 10' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:14.995 17:58:04 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d df6d49fe-9811-4d6b-b929-4743b523df7e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:14.995 [2024-10-13 17:58:04.807911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.995 [2024-10-13 17:58:04.807958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:14.995 [2024-10-13 17:58:04.807973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:14.995 [2024-10-13 17:58:04.807980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.995 [2024-10-13 17:58:04.808042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.995 [2024-10-13 17:58:04.808051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.995 [2024-10-13 17:58:04.808061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:14.995 [2024-10-13 17:58:04.808072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.808094] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:15.258 [2024-10-13 17:58:04.808793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:15.258 [2024-10-13 17:58:04.808824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.808831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.258 [2024-10-13 17:58:04.808841] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:28:15.258 [2024-10-13 17:58:04.808848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.809002] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 00:28:15.258 [2024-10-13 17:58:04.810344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.810379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:15.258 [2024-10-13 17:58:04.810388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:15.258 [2024-10-13 17:58:04.810399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.817353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.817385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.258 [2024-10-13 17:58:04.817393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:28:15.258 [2024-10-13 17:58:04.817401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.817478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.817488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.258 [2024-10-13 17:58:04.817495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:15.258 [2024-10-13 17:58:04.817505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.817538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.817547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:15.258 [2024-10-13 17:58:04.817554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:15.258 [2024-10-13 17:58:04.817574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.817594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:15.258 [2024-10-13 17:58:04.820948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.820977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.258 [2024-10-13 17:58:04.820987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.359 ms 00:28:15.258 [2024-10-13 17:58:04.820998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.821029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.821036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:15.258 [2024-10-13 17:58:04.821044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:15.258 [2024-10-13 17:58:04.821050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.821075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:15.258 [2024-10-13 17:58:04.821190] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:15.258 [2024-10-13 17:58:04.821210] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:15.258 [2024-10-13 17:58:04.821220] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:15.258 [2024-10-13 17:58:04.821229] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821245] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:15.258 [2024-10-13 17:58:04.821251] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:15.258 [2024-10-13 17:58:04.821259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:15.258 [2024-10-13 17:58:04.821265] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:15.258 [2024-10-13 17:58:04.821272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.821280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:15.258 [2024-10-13 17:58:04.821288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:28:15.258 [2024-10-13 17:58:04.821300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.821366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.258 [2024-10-13 17:58:04.821373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:15.258 [2024-10-13 17:58:04.821381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:15.258 [2024-10-13 17:58:04.821387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.258 [2024-10-13 17:58:04.821464] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:15.258 [2024-10-13 17:58:04.821479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:15.258 [2024-10-13 17:58:04.821489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:15.258 [2024-10-13 17:58:04.821510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:15.258 [2024-10-13 17:58:04.821530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.258 [2024-10-13 17:58:04.821542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:15.258 [2024-10-13 17:58:04.821549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:15.258 [2024-10-13 17:58:04.821565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.258 [2024-10-13 17:58:04.821571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:15.258 [2024-10-13 17:58:04.821578] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:15.258 [2024-10-13 17:58:04.821584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:15.258 [2024-10-13 17:58:04.821598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:15.258 [2024-10-13 17:58:04.821618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:15.258 [2024-10-13 17:58:04.821635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:15.258 [2024-10-13 17:58:04.821653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:15.258 [2024-10-13 17:58:04.821671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.258 [2024-10-13 17:58:04.821682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:15.258 [2024-10-13 17:58:04.821690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:15.258 [2024-10-13 17:58:04.821695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.258 [2024-10-13 17:58:04.821702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:15.258 [2024-10-13 17:58:04.821707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:15.258 [2024-10-13 17:58:04.821713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.258 [2024-10-13 17:58:04.821719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:15.258 [2024-10-13 17:58:04.821726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:15.258 [2024-10-13 17:58:04.821731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.259 [2024-10-13 17:58:04.821737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:15.259 [2024-10-13 17:58:04.821742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:15.259 [2024-10-13 17:58:04.821749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.259 [2024-10-13 17:58:04.821756] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:15.259 [2024-10-13 17:58:04.821764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:15.259 [2024-10-13 17:58:04.821770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:15.259 [2024-10-13 17:58:04.821779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.259 [2024-10-13 17:58:04.821785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:15.259 [2024-10-13 17:58:04.821794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:15.259 [2024-10-13 17:58:04.821799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:15.259 [2024-10-13 17:58:04.821805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:15.259 [2024-10-13 17:58:04.821811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:15.259 [2024-10-13 17:58:04.821818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:15.259 [2024-10-13 17:58:04.821827] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:15.259 [2024-10-13 17:58:04.821836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:15.259 [2024-10-13 17:58:04.821850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:15.259 [2024-10-13 17:58:04.821856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:15.259 [2024-10-13 17:58:04.821863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:15.259 [2024-10-13 17:58:04.821869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:15.259 [2024-10-13 17:58:04.821875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:15.259 [2024-10-13 17:58:04.821882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:15.259 [2024-10-13 17:58:04.821889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:15.259 [2024-10-13 17:58:04.821894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:15.259 [2024-10-13 17:58:04.821902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
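Each region row in the superblock metadata dump above carries a region type, a version, and an offset/size in FTL blocks (blk_offs/blk_sz). The block size appears to be 4 KiB here, consistent with the 4096-byte block_size reported for the underlying bdevs earlier, and the rows can be cross-checked against the MiB-based layout dump printed just before: the type 0x2 row (blk_offs:0x20 blk_sz:0x5000) lines up with the "Region l2p" entry at offset 0.12 MiB, 80.00 MiB. A quick arithmetic check of that correspondence (4 KiB blocks assumed):

echo "l2p offset: $(( 0x20 * 4096 / 1024 )) KiB"            # 128 KiB ~= 0.12 MiB
echo "l2p size:   $(( 0x5000 * 4096 / 1024 / 1024 )) MiB"   # 80 MiB

The base-device half of the same superblock layout dump follows.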
00:28:15.259 [2024-10-13 17:58:04.821932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:15.259 [2024-10-13 17:58:04.821940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.259 [2024-10-13 17:58:04.821955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:15.259 [2024-10-13 17:58:04.821961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:15.259 [2024-10-13 17:58:04.821968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:15.259 [2024-10-13 17:58:04.821975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.259 [2024-10-13 17:58:04.821983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:15.259 [2024-10-13 17:58:04.821989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:28:15.259 [2024-10-13 17:58:04.821996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.259 [2024-10-13 17:58:04.822041] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:15.259 [2024-10-13 17:58:04.822053] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:19.509 [2024-10-13 17:58:08.807983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.509 [2024-10-13 17:58:08.808106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:19.509 [2024-10-13 17:58:08.808129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3985.924 ms 00:28:19.509 [2024-10-13 17:58:08.808141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.509 [2024-10-13 17:58:08.846082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.509 [2024-10-13 17:58:08.846163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.510 [2024-10-13 17:58:08.846181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.666 ms 00:28:19.510 [2024-10-13 17:58:08.846193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.846354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.846370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.510 [2024-10-13 17:58:08.846381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:19.510 [2024-10-13 17:58:08.846396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.886990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.887057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.510 [2024-10-13 17:58:08.887073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.540 ms 00:28:19.510 [2024-10-13 17:58:08.887085] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.887128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.887143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.510 [2024-10-13 17:58:08.887153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.510 [2024-10-13 17:58:08.887168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.887954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.888005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.510 [2024-10-13 17:58:08.888018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:28:19.510 [2024-10-13 17:58:08.888031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.888162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.888178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.510 [2024-10-13 17:58:08.888189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:19.510 [2024-10-13 17:58:08.888204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.908702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.908757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.510 [2024-10-13 17:58:08.908769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.473 ms 00:28:19.510 [2024-10-13 17:58:08.908785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:08.924634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:19.510 [2024-10-13 17:58:08.929720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:08.929765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.510 [2024-10-13 17:58:08.929779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.831 ms 00:28:19.510 [2024-10-13 17:58:08.929790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.046134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.046198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:19.510 [2024-10-13 17:58:09.046218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.301 ms 00:28:19.510 [2024-10-13 17:58:09.046229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.046462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.046784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.510 [2024-10-13 17:58:09.046817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:28:19.510 [2024-10-13 17:58:09.046832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.074329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.074388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:19.510 [2024-10-13 17:58:09.074405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.417 ms 00:28:19.510 [2024-10-13 17:58:09.074415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.100029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.100082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:19.510 [2024-10-13 17:58:09.100098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.554 ms 00:28:19.510 [2024-10-13 17:58:09.100106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.100765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.100792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.510 [2024-10-13 17:58:09.100805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:28:19.510 [2024-10-13 17:58:09.100814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.193117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.193176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:19.510 [2024-10-13 17:58:09.193197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.240 ms 00:28:19.510 [2024-10-13 17:58:09.193206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.221934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.222003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:19.510 [2024-10-13 17:58:09.222023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.625 ms 00:28:19.510 [2024-10-13 17:58:09.222032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.248037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.248092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:19.510 [2024-10-13 17:58:09.248107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.946 ms 00:28:19.510 [2024-10-13 17:58:09.248115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.274782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.274837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.510 [2024-10-13 17:58:09.274852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.609 ms 00:28:19.510 [2024-10-13 17:58:09.274859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.274920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 17:58:09.274930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.510 [2024-10-13 17:58:09.274946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:19.510 [2024-10-13 17:58:09.274955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.510 [2024-10-13 17:58:09.275057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.510 [2024-10-13 
17:58:09.275069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:19.510 [2024-10-13 17:58:09.275081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:28:19.510 [2024-10-13 17:58:09.275089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:19.510 [2024-10-13 17:58:09.276698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4468.158 ms, result 0
00:28:19.510 {
00:28:19.510 "name": "ftl0",
00:28:19.510 "uuid": "9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6"
00:28:19.510 }
00:28:19.510 17:58:09 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:28:19.510 17:58:09 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:28:19.771 17:58:09 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:28:19.771 17:58:09 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:28:20.033 [2024-10-13 17:58:09.723700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:20.033 [2024-10-13 17:58:09.723772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:20.033 [2024-10-13 17:58:09.723789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:20.033 [2024-10-13 17:58:09.723814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.033 [2024-10-13 17:58:09.723843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:20.033 [2024-10-13 17:58:09.727173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:20.033 [2024-10-13 17:58:09.727219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:20.033 [2024-10-13 17:58:09.727238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms
00:28:20.033 [2024-10-13 17:58:09.727248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.033 [2024-10-13 17:58:09.727594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:20.033 [2024-10-13 17:58:09.727609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:20.033 [2024-10-13 17:58:09.727622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms
00:28:20.033 [2024-10-13 17:58:09.727632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.033 [2024-10-13 17:58:09.730880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:20.033 [2024-10-13 17:58:09.730906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:20.033 [2024-10-13 17:58:09.730919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms
00:28:20.033 [2024-10-13 17:58:09.730929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.033 [2024-10-13 17:58:09.737203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:20.033 [2024-10-13 17:58:09.737251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:20.033 [2024-10-13 17:58:09.737266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.249 ms
00:28:20.033 [2024-10-13 17:58:09.737275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.033 [2024-10-13 17:58:09.765041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:28:20.033 [2024-10-13 17:58:09.765096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:20.034 [2024-10-13 17:58:09.765113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.683 ms 00:28:20.034 [2024-10-13 17:58:09.765121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.034 [2024-10-13 17:58:09.784806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.034 [2024-10-13 17:58:09.784862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:20.034 [2024-10-13 17:58:09.784882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.622 ms 00:28:20.034 [2024-10-13 17:58:09.784890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.034 [2024-10-13 17:58:09.785069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.034 [2024-10-13 17:58:09.785083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:20.034 [2024-10-13 17:58:09.785096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:20.034 [2024-10-13 17:58:09.785105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.034 [2024-10-13 17:58:09.811517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.034 [2024-10-13 17:58:09.811578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:20.034 [2024-10-13 17:58:09.811601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.389 ms 00:28:20.034 [2024-10-13 17:58:09.811609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.034 [2024-10-13 17:58:09.837455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.034 [2024-10-13 17:58:09.837508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:20.034 [2024-10-13 17:58:09.837523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.791 ms 00:28:20.034 [2024-10-13 17:58:09.837531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.296 [2024-10-13 17:58:09.862916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.296 [2024-10-13 17:58:09.862968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:20.296 [2024-10-13 17:58:09.862982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.318 ms 00:28:20.296 [2024-10-13 17:58:09.862990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.296 [2024-10-13 17:58:09.888297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.296 [2024-10-13 17:58:09.888348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:20.296 [2024-10-13 17:58:09.888362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.169 ms 00:28:20.296 [2024-10-13 17:58:09.888370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.296 [2024-10-13 17:58:09.888420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:20.296 [2024-10-13 17:58:09.888438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:20.296 [2024-10-13 17:58:09.888451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888460] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888716] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 
[2024-10-13 17:58:09.888944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.888998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:20.297 [2024-10-13 17:58:09.889193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:20.297 [2024-10-13 17:58:09.889352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:20.298 [2024-10-13 17:58:09.889417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:20.298 [2024-10-13 17:58:09.889428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 
00:28:20.298 [2024-10-13 17:58:09.889436] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:20.298 [2024-10-13 17:58:09.889449] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:20.298 [2024-10-13 17:58:09.889460] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:20.298 [2024-10-13 17:58:09.889472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:20.298 [2024-10-13 17:58:09.889479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:20.298 [2024-10-13 17:58:09.889494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:20.298 [2024-10-13 17:58:09.889502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:20.298 [2024-10-13 17:58:09.889512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:20.298 [2024-10-13 17:58:09.889519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:20.298 [2024-10-13 17:58:09.889529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.298 [2024-10-13 17:58:09.889537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:20.298 [2024-10-13 17:58:09.889549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:28:20.298 [2024-10-13 17:58:09.889569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.904292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.298 [2024-10-13 17:58:09.904341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:20.298 [2024-10-13 17:58:09.904355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.664 ms 00:28:20.298 [2024-10-13 17:58:09.904363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.904808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.298 [2024-10-13 17:58:09.904831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:20.298 [2024-10-13 17:58:09.904844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:28:20.298 [2024-10-13 17:58:09.904852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.955645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.298 [2024-10-13 17:58:09.955698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:20.298 [2024-10-13 17:58:09.955715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.298 [2024-10-13 17:58:09.955725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.955806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.298 [2024-10-13 17:58:09.955816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:20.298 [2024-10-13 17:58:09.955828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.298 [2024-10-13 17:58:09.955836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.955930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.298 [2024-10-13 17:58:09.955942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:20.298 [2024-10-13 17:58:09.955954] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.298 [2024-10-13 17:58:09.955963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:09.955988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.298 [2024-10-13 17:58:09.955997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:20.298 [2024-10-13 17:58:09.956007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.298 [2024-10-13 17:58:09.956014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.298 [2024-10-13 17:58:10.048721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.298 [2024-10-13 17:58:10.048810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.298 [2024-10-13 17:58:10.048829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.298 [2024-10-13 17:58:10.048838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.125931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:20.560 [2024-10-13 17:58:10.126027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.560 [2024-10-13 17:58:10.126036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.126200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.560 [2024-10-13 17:58:10.126230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.560 [2024-10-13 17:58:10.126238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.126299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.560 [2024-10-13 17:58:10.126320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.560 [2024-10-13 17:58:10.126329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.126444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.560 [2024-10-13 17:58:10.126471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.560 [2024-10-13 17:58:10.126480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.126521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:20.560 [2024-10-13 17:58:10.126543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.560 [2024-10-13 17:58:10.126552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.560 [2024-10-13 17:58:10.126625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.560 [2024-10-13 17:58:10.126638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev
00:28:20.560 [2024-10-13 17:58:10.126649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:20.560 [2024-10-13 17:58:10.126661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.560 [2024-10-13 17:58:10.126734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:20.560 [2024-10-13 17:58:10.126755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:20.560 [2024-10-13 17:58:10.126767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:20.560 [2024-10-13 17:58:10.126775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:20.560 [2024-10-13 17:58:10.126965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.203 ms, result 0
00:28:20.560 true
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81878
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 81878 ']'
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 81878
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81878
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:20.560 killing process with pid 81878
17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81878'
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 81878
00:28:20.560 17:58:10 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 81878
00:28:27.150 17:58:15 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:28:30.453 262144+0 records in
00:28:30.453 262144+0 records out
00:28:30.453 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.83414 s, 280 MB/s
00:28:30.453 17:58:19 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:28:32.368 17:58:21 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-10-13 17:58:21.924826] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization...
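[Editor's note] The xtrace lines above (restore.sh@61 through @73) are the whole "save config, tear down, rewrite" sequence of this test: the bdev subsystem configuration is captured over RPC and wrapped into a {"subsystems": [...]} document, ftl0 is unloaded so its state is persisted (the bare "true" is the RPC response), the @950-@974 lines are the body of the killprocess helper running (check the pid, confirm the process name is not sudo, echo, kill, wait), and a random 1 GiB test file is prepared. A sketch of the same sequence, assuming the echoed JSON is redirected into the ftl.json later passed to spdk_dd (the redirection itself is not visible in the trace) and using SPDK as shorthand for /home/vagrant/spdk_repo/spdk:

  SPDK=/home/vagrant/spdk_repo/spdk
  {
    echo '{"subsystems": ['                                # restore.sh@61
    "$SPDK"/scripts/rpc.py save_subsystem_config -n bdev   # @62: dump the bdev subsystem config as JSON
    echo ']}'                                              # @63
  } > "$SPDK"/test/ftl/config/ftl.json                     # assumption: target file inferred from @73
  "$SPDK"/scripts/rpc.py bdev_ftl_unload -b ftl0           # @65: clean FTL shutdown, prints 'true'
  killprocess 81878                                        # @66: autotest_common.sh helper (kill + wait)
  dd if=/dev/urandom of="$SPDK"/test/ftl/testfile bs=4K count=256K   # @69: 1 GiB of random data
  md5sum "$SPDK"/test/ftl/testfile                         # @70: checksum to verify the restore later
  "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/testfile --ob=ftl0 \
      --json="$SPDK"/test/ftl/config/ftl.json              # @73: standalone app, same bdev config

The dd figures are self-consistent: bs=4K count=256K is 262144 x 4096 = 1073741824 bytes (1.0 GiB), and 1073741824 bytes / 3.83414 s is ~280 MB/s in the decimal megabytes dd reports. The "WAF: inf" in the stats dump further up also follows directly from the numbers shown there: write amplification is total writes / user writes, here 960 / 0.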
00:28:32.368 [2024-10-13 17:58:21.924960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82105 ] 00:28:32.368 [2024-10-13 17:58:22.076731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.629 [2024-10-13 17:58:22.227369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.890 [2024-10-13 17:58:22.557870] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.890 [2024-10-13 17:58:22.557969] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:33.152 [2024-10-13 17:58:22.722928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.152 [2024-10-13 17:58:22.722994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:33.152 [2024-10-13 17:58:22.723011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:33.152 [2024-10-13 17:58:22.723026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.152 [2024-10-13 17:58:22.723087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.152 [2024-10-13 17:58:22.723098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:33.152 [2024-10-13 17:58:22.723108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:33.152 [2024-10-13 17:58:22.723119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.152 [2024-10-13 17:58:22.723142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:33.152 [2024-10-13 17:58:22.724246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:33.152 [2024-10-13 17:58:22.724313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.152 [2024-10-13 17:58:22.724330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:33.152 [2024-10-13 17:58:22.724342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:28:33.152 [2024-10-13 17:58:22.724350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.152 [2024-10-13 17:58:22.726691] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:33.152 [2024-10-13 17:58:22.742829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.152 [2024-10-13 17:58:22.742880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:33.152 [2024-10-13 17:58:22.742895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.140 ms 00:28:33.152 [2024-10-13 17:58:22.742905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.152 [2024-10-13 17:58:22.742996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.152 [2024-10-13 17:58:22.743008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:33.152 [2024-10-13 17:58:22.743021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:33.152 [2024-10-13 17:58:22.743030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.152 [2024-10-13 17:58:22.754856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:33.152 [2024-10-13 17:58:22.754906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:33.152 [2024-10-13 17:58:22.754920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.739 ms 00:28:33.153 [2024-10-13 17:58:22.754929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.755024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.755034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:33.153 [2024-10-13 17:58:22.755043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:33.153 [2024-10-13 17:58:22.755053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.755115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.755125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:33.153 [2024-10-13 17:58:22.755134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:33.153 [2024-10-13 17:58:22.755142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.755166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:33.153 [2024-10-13 17:58:22.759869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.759916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:33.153 [2024-10-13 17:58:22.759927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:28:33.153 [2024-10-13 17:58:22.759936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.759981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.759990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:33.153 [2024-10-13 17:58:22.759999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:33.153 [2024-10-13 17:58:22.760008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.760048] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:33.153 [2024-10-13 17:58:22.760077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:33.153 [2024-10-13 17:58:22.760120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:33.153 [2024-10-13 17:58:22.760142] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:33.153 [2024-10-13 17:58:22.760254] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:33.153 [2024-10-13 17:58:22.760267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:33.153 [2024-10-13 17:58:22.760278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:33.153 [2024-10-13 17:58:22.760289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760299] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760309] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:33.153 [2024-10-13 17:58:22.760318] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:33.153 [2024-10-13 17:58:22.760327] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:33.153 [2024-10-13 17:58:22.760336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:33.153 [2024-10-13 17:58:22.760346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.760359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:33.153 [2024-10-13 17:58:22.760369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:33.153 [2024-10-13 17:58:22.760377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.760461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.153 [2024-10-13 17:58:22.760470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:33.153 [2024-10-13 17:58:22.760479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:33.153 [2024-10-13 17:58:22.760487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.153 [2024-10-13 17:58:22.760614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:33.153 [2024-10-13 17:58:22.760627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:33.153 [2024-10-13 17:58:22.760639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:33.153 [2024-10-13 17:58:22.760663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:33.153 [2024-10-13 17:58:22.760685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.153 [2024-10-13 17:58:22.760699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:33.153 [2024-10-13 17:58:22.760705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:33.153 [2024-10-13 17:58:22.760712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.153 [2024-10-13 17:58:22.760730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:33.153 [2024-10-13 17:58:22.760739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:33.153 [2024-10-13 17:58:22.760754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:33.153 [2024-10-13 17:58:22.760769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760776] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:33.153 [2024-10-13 17:58:22.760791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:33.153 [2024-10-13 17:58:22.760814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:33.153 [2024-10-13 17:58:22.760836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:33.153 [2024-10-13 17:58:22.760858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:33.153 [2024-10-13 17:58:22.760881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.153 [2024-10-13 17:58:22.760894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:33.153 [2024-10-13 17:58:22.760901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:33.153 [2024-10-13 17:58:22.760908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.153 [2024-10-13 17:58:22.760915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:33.153 [2024-10-13 17:58:22.760922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:33.153 [2024-10-13 17:58:22.760929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:33.153 [2024-10-13 17:58:22.760943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:33.153 [2024-10-13 17:58:22.760949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760956] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:33.153 [2024-10-13 17:58:22.760964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:33.153 [2024-10-13 17:58:22.760975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:33.153 [2024-10-13 17:58:22.760984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.153 [2024-10-13 17:58:22.760992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:33.153 [2024-10-13 17:58:22.760999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:33.153 [2024-10-13 17:58:22.761007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:33.153 
[2024-10-13 17:58:22.761014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:33.153 [2024-10-13 17:58:22.761022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:33.153 [2024-10-13 17:58:22.761030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:33.153 [2024-10-13 17:58:22.761039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:33.153 [2024-10-13 17:58:22.761048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.153 [2024-10-13 17:58:22.761057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:33.153 [2024-10-13 17:58:22.761065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:33.153 [2024-10-13 17:58:22.761073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:33.153 [2024-10-13 17:58:22.761080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:33.153 [2024-10-13 17:58:22.761088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:33.153 [2024-10-13 17:58:22.761095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:33.153 [2024-10-13 17:58:22.761103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:33.153 [2024-10-13 17:58:22.761110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:33.153 [2024-10-13 17:58:22.761118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:33.153 [2024-10-13 17:58:22.761125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:33.153 [2024-10-13 17:58:22.761132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:33.153 [2024-10-13 17:58:22.761139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:33.154 [2024-10-13 17:58:22.761146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:33.154 [2024-10-13 17:58:22.761153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:33.154 [2024-10-13 17:58:22.761161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:33.154 [2024-10-13 17:58:22.761171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.154 [2024-10-13 17:58:22.761182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:33.154 [2024-10-13 17:58:22.761189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:33.154 [2024-10-13 17:58:22.761196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:33.154 [2024-10-13 17:58:22.761203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:33.154 [2024-10-13 17:58:22.761210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.761218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:33.154 [2024-10-13 17:58:22.761228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:28:33.154 [2024-10-13 17:58:22.761237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.799848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.799899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:33.154 [2024-10-13 17:58:22.799913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.562 ms 00:28:33.154 [2024-10-13 17:58:22.799922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.800021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.800036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:33.154 [2024-10-13 17:58:22.800045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:33.154 [2024-10-13 17:58:22.800054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.851514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.851599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:33.154 [2024-10-13 17:58:22.851614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.395 ms 00:28:33.154 [2024-10-13 17:58:22.851625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.851682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.851693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:33.154 [2024-10-13 17:58:22.851703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:33.154 [2024-10-13 17:58:22.851712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.852474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.852521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:33.154 [2024-10-13 17:58:22.852533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:28:33.154 [2024-10-13 17:58:22.852543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.852749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.852762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:33.154 [2024-10-13 17:58:22.852772] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:28:33.154 [2024-10-13 17:58:22.852781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.871230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.871282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:33.154 [2024-10-13 17:58:22.871294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.423 ms 00:28:33.154 [2024-10-13 17:58:22.871306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.887017] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:33.154 [2024-10-13 17:58:22.887075] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:33.154 [2024-10-13 17:58:22.887091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.887100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:33.154 [2024-10-13 17:58:22.887111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.659 ms 00:28:33.154 [2024-10-13 17:58:22.887120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.914197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.914250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:33.154 [2024-10-13 17:58:22.914263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.016 ms 00:28:33.154 [2024-10-13 17:58:22.914280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.927880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.927942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:33.154 [2024-10-13 17:58:22.927954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.538 ms 00:28:33.154 [2024-10-13 17:58:22.927962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.941032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.941082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:33.154 [2024-10-13 17:58:22.941095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.020 ms 00:28:33.154 [2024-10-13 17:58:22.941102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.154 [2024-10-13 17:58:22.941783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.154 [2024-10-13 17:58:22.941818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:33.154 [2024-10-13 17:58:22.941830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:28:33.154 [2024-10-13 17:58:22.941839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.415 [2024-10-13 17:58:23.015847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.415 [2024-10-13 17:58:23.015909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:33.415 [2024-10-13 17:58:23.015925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.988 ms 00:28:33.415 [2024-10-13 17:58:23.015934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.415 [2024-10-13 17:58:23.028304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:33.415 [2024-10-13 17:58:23.032051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.415 [2024-10-13 17:58:23.032096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:33.415 [2024-10-13 17:58:23.032108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:28:33.416 [2024-10-13 17:58:23.032117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.032212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.032225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:33.416 [2024-10-13 17:58:23.032235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:33.416 [2024-10-13 17:58:23.032243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.032327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.032342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:33.416 [2024-10-13 17:58:23.032352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:33.416 [2024-10-13 17:58:23.032361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.032385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.032394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:33.416 [2024-10-13 17:58:23.032404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:33.416 [2024-10-13 17:58:23.032413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.032456] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:33.416 [2024-10-13 17:58:23.032467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.032476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:33.416 [2024-10-13 17:58:23.032490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:33.416 [2024-10-13 17:58:23.032500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.059601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.059648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:33.416 [2024-10-13 17:58:23.059663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.080 ms 00:28:33.416 [2024-10-13 17:58:23.059672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.416 [2024-10-13 17:58:23.059773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.416 [2024-10-13 17:58:23.059787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:33.416 [2024-10-13 17:58:23.059798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:33.416 [2024-10-13 17:58:23.059807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:33.416 [2024-10-13 17:58:23.061323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.825 ms, result 0 00:28:34.358  [2024-10-13T17:59:24.383Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-13 17:59:24.233655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.569 [2024-10-13 17:59:24.233694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:34.569 [2024-10-13 17:59:24.233707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:34.569 [2024-10-13 17:59:24.233714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.569 [2024-10-13 17:59:24.233730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:34.569 [2024-10-13 17:59:24.236026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.569 [2024-10-13 17:59:24.236052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:34.569 [2024-10-13 17:59:24.236061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:29:34.569 [2024-10-13 17:59:24.236068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.569 [2024-10-13 17:59:24.238234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.569 [2024-10-13 17:59:24.238264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:34.569 [2024-10-13 17:59:24.238272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:29:34.569 [2024-10-13 17:59:24.238279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.569 [2024-10-13 17:59:24.238299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.569 [2024-10-13 17:59:24.238306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:34.569 [2024-10-13 17:59:24.238313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:34.569 [2024-10-13 17:59:24.238319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.569 [2024-10-13 17:59:24.238359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.569 [2024-10-13 17:59:24.238366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:34.569 [2024-10-13 17:59:24.238374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:34.569 [2024-10-13 17:59:24.238380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.569 [2024-10-13 17:59:24.238390] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:34.569 [2024-10-13 17:59:24.238400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238581] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 
17:59:24.238735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:29:34.569 [2024-10-13 17:59:24.238888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:34.569 [2024-10-13 17:59:24.238958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.238999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.239005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:34.570 [2024-10-13 17:59:24.239017] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:34.570 [2024-10-13 17:59:24.239023] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 00:29:34.570 [2024-10-13 17:59:24.239029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:34.570 [2024-10-13 17:59:24.239035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:34.570 [2024-10-13 17:59:24.239041] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:34.570 [2024-10-13 17:59:24.239047] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:34.570 [2024-10-13 17:59:24.239053] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:34.570 [2024-10-13 17:59:24.239060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:34.570 [2024-10-13 17:59:24.239066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:34.570 [2024-10-13 17:59:24.239071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:34.570 [2024-10-13 17:59:24.239076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:34.570 [2024-10-13 17:59:24.239081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.570 [2024-10-13 17:59:24.239087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:34.570 [2024-10-13 17:59:24.239093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:29:34.570 [2024-10-13 17:59:24.239099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.249065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.570 [2024-10-13 17:59:24.249091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:34.570 [2024-10-13 17:59:24.249099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.955 ms 00:29:34.570 [2024-10-13 17:59:24.249108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.249393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.570 [2024-10-13 17:59:24.249400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:34.570 [2024-10-13 17:59:24.249406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:29:34.570 [2024-10-13 17:59:24.249411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.276801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.570 [2024-10-13 17:59:24.276828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:34.570 [2024-10-13 17:59:24.276839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.570 [2024-10-13 17:59:24.276845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.276892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.570 [2024-10-13 17:59:24.276899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:34.570 [2024-10-13 17:59:24.276905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.570 [2024-10-13 17:59:24.276910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.276961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.570 [2024-10-13 17:59:24.276969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:34.570 [2024-10-13 17:59:24.276975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.570 [2024-10-13 17:59:24.276983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.276995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.570 
[2024-10-13 17:59:24.277002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:34.570 [2024-10-13 17:59:24.277008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.570 [2024-10-13 17:59:24.277013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.570 [2024-10-13 17:59:24.339987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.570 [2024-10-13 17:59:24.340022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:34.570 [2024-10-13 17:59:24.340031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.570 [2024-10-13 17:59:24.340041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.830 [2024-10-13 17:59:24.391241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.830 [2024-10-13 17:59:24.391278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:34.831 [2024-10-13 17:59:24.391288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:34.831 [2024-10-13 17:59:24.391358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:34.831 [2024-10-13 17:59:24.391429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:34.831 [2024-10-13 17:59:24.391510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:34.831 [2024-10-13 17:59:24.391594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:34.831 [2024-10-13 17:59:24.391650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391698] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:34.831 [2024-10-13 17:59:24.391707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:34.831 [2024-10-13 17:59:24.391714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:34.831 [2024-10-13 17:59:24.391720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.831 [2024-10-13 17:59:24.391825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.141 ms, result 0 00:29:35.773 00:29:35.773 00:29:35.773 17:59:25 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:35.773 [2024-10-13 17:59:25.429664] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:29:35.773 [2024-10-13 17:59:25.429776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82745 ] 00:29:35.773 [2024-10-13 17:59:25.579888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.033 [2024-10-13 17:59:25.676391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.295 [2024-10-13 17:59:25.937156] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:36.295 [2024-10-13 17:59:25.937228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:36.295 [2024-10-13 17:59:26.099851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.099913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:36.295 [2024-10-13 17:59:26.099929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:36.295 [2024-10-13 17:59:26.099943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.099998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.100010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:36.295 [2024-10-13 17:59:26.100019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:36.295 [2024-10-13 17:59:26.100030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.100050] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:36.295 [2024-10-13 17:59:26.100814] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:36.295 [2024-10-13 17:59:26.100838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.100853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:36.295 [2024-10-13 17:59:26.100865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:29:36.295 [2024-10-13 17:59:26.100874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.101168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:36.295 
[2024-10-13 17:59:26.101198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.101208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:36.295 [2024-10-13 17:59:26.101219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:36.295 [2024-10-13 17:59:26.101231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.101292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.101303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:36.295 [2024-10-13 17:59:26.101311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:36.295 [2024-10-13 17:59:26.101320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.101652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.101668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:36.295 [2024-10-13 17:59:26.101682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:29:36.295 [2024-10-13 17:59:26.101689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.101760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.101772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:36.295 [2024-10-13 17:59:26.101780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:36.295 [2024-10-13 17:59:26.101789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.295 [2024-10-13 17:59:26.101812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.295 [2024-10-13 17:59:26.101821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:36.296 [2024-10-13 17:59:26.101831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:36.296 [2024-10-13 17:59:26.101840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.296 [2024-10-13 17:59:26.101866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:36.296 [2024-10-13 17:59:26.106351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.296 [2024-10-13 17:59:26.106401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:36.296 [2024-10-13 17:59:26.106416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.489 ms 00:29:36.296 [2024-10-13 17:59:26.106424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.296 [2024-10-13 17:59:26.106461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.296 [2024-10-13 17:59:26.106470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:36.296 [2024-10-13 17:59:26.106479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:36.296 [2024-10-13 17:59:26.106487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.296 [2024-10-13 17:59:26.106548] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:36.296 [2024-10-13 17:59:26.106589] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 
0x150 bytes 00:29:36.296 [2024-10-13 17:59:26.106627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:36.296 [2024-10-13 17:59:26.106649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:36.296 [2024-10-13 17:59:26.106754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:36.296 [2024-10-13 17:59:26.106766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:36.296 [2024-10-13 17:59:26.106777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:36.296 [2024-10-13 17:59:26.106790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:36.296 [2024-10-13 17:59:26.106802] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:36.296 [2024-10-13 17:59:26.106811] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:36.296 [2024-10-13 17:59:26.106819] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:36.296 [2024-10-13 17:59:26.106829] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:36.296 [2024-10-13 17:59:26.106836] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:36.296 [2024-10-13 17:59:26.106844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.296 [2024-10-13 17:59:26.106852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:36.296 [2024-10-13 17:59:26.106860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:29:36.296 [2024-10-13 17:59:26.106868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.296 [2024-10-13 17:59:26.106951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.296 [2024-10-13 17:59:26.106961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:36.296 [2024-10-13 17:59:26.106970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:36.296 [2024-10-13 17:59:26.106977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.296 [2024-10-13 17:59:26.107081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:36.296 [2024-10-13 17:59:26.107093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:36.296 [2024-10-13 17:59:26.107102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.296 [2024-10-13 17:59:26.107113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:36.296 [2024-10-13 17:59:26.107129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:36.296 [2024-10-13 17:59:26.107144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:36.296 [2024-10-13 17:59:26.107151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:29:36.296 [2024-10-13 17:59:26.107167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:36.296 [2024-10-13 17:59:26.107174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:36.296 [2024-10-13 17:59:26.107181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:36.296 [2024-10-13 17:59:26.107188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:36.296 [2024-10-13 17:59:26.107195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:36.296 [2024-10-13 17:59:26.107201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:36.296 [2024-10-13 17:59:26.107221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:36.296 [2024-10-13 17:59:26.107228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:36.296 [2024-10-13 17:59:26.107241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.296 [2024-10-13 17:59:26.107254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:36.296 [2024-10-13 17:59:26.107261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:36.296 [2024-10-13 17:59:26.107267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.296 [2024-10-13 17:59:26.107273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:36.296 [2024-10-13 17:59:26.107279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.558 [2024-10-13 17:59:26.107295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:36.558 [2024-10-13 17:59:26.107302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.558 [2024-10-13 17:59:26.107315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:36.558 [2024-10-13 17:59:26.107321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.558 [2024-10-13 17:59:26.107334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:36.558 [2024-10-13 17:59:26.107340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:36.558 [2024-10-13 17:59:26.107346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.558 [2024-10-13 17:59:26.107352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:36.558 [2024-10-13 17:59:26.107360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:36.558 [2024-10-13 17:59:26.107368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:36.558 [2024-10-13 17:59:26.107380] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:36.558 [2024-10-13 17:59:26.107388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:36.558 [2024-10-13 17:59:26.107403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:36.558 [2024-10-13 17:59:26.107410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.558 [2024-10-13 17:59:26.107417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.558 [2024-10-13 17:59:26.107425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:36.558 [2024-10-13 17:59:26.107432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:36.558 [2024-10-13 17:59:26.107439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:36.558 [2024-10-13 17:59:26.107446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:36.558 [2024-10-13 17:59:26.107453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:36.558 [2024-10-13 17:59:26.107460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:36.558 [2024-10-13 17:59:26.107468] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:36.558 [2024-10-13 17:59:26.107477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.558 [2024-10-13 17:59:26.107489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:36.558 [2024-10-13 17:59:26.107496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:36.558 [2024-10-13 17:59:26.107503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:36.558 [2024-10-13 17:59:26.107510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:36.558 [2024-10-13 17:59:26.107519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:36.559 [2024-10-13 17:59:26.107542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:36.559 [2024-10-13 17:59:26.107552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:36.559 [2024-10-13 17:59:26.107573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:36.559 [2024-10-13 17:59:26.107581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:36.559 [2024-10-13 17:59:26.107588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:36.559 [2024-10-13 17:59:26.107595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:36.559 [2024-10-13 
17:59:26.107602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:36.559 [2024-10-13 17:59:26.107609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:36.559 [2024-10-13 17:59:26.107617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:36.559 [2024-10-13 17:59:26.107623] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:36.559 [2024-10-13 17:59:26.107632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.559 [2024-10-13 17:59:26.107639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:36.559 [2024-10-13 17:59:26.107657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:36.559 [2024-10-13 17:59:26.107665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:36.559 [2024-10-13 17:59:26.107680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:36.559 [2024-10-13 17:59:26.107689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.107697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:36.559 [2024-10-13 17:59:26.107705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:29:36.559 [2024-10-13 17:59:26.107712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.136401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.136449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:36.559 [2024-10-13 17:59:26.136461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.645 ms 00:29:36.559 [2024-10-13 17:59:26.136470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.136567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.136577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:36.559 [2024-10-13 17:59:26.136586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:36.559 [2024-10-13 17:59:26.136594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.181232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.181290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:36.559 [2024-10-13 17:59:26.181304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.576 ms 00:29:36.559 [2024-10-13 17:59:26.181313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.181362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.181373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:29:36.559 [2024-10-13 17:59:26.181387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:36.559 [2024-10-13 17:59:26.181395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.181521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.181534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:36.559 [2024-10-13 17:59:26.181546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:36.559 [2024-10-13 17:59:26.181574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.181711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.181723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:36.559 [2024-10-13 17:59:26.181733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:29:36.559 [2024-10-13 17:59:26.181745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.197838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.197885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:36.559 [2024-10-13 17:59:26.197901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:29:36.559 [2024-10-13 17:59:26.197909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.198061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:36.559 [2024-10-13 17:59:26.198076] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:36.559 [2024-10-13 17:59:26.198089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.198097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:36.559 [2024-10-13 17:59:26.198107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:36.559 [2024-10-13 17:59:26.198118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.210427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.210469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:36.559 [2024-10-13 17:59:26.210481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:29:36.559 [2024-10-13 17:59:26.210489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.210633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.210644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:36.559 [2024-10-13 17:59:26.210656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:29:36.559 [2024-10-13 17:59:26.210664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.210715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.210733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:36.559 [2024-10-13 17:59:26.210742] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:36.559 [2024-10-13 17:59:26.210751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.211339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.211353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:36.559 [2024-10-13 17:59:26.211363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:29:36.559 [2024-10-13 17:59:26.211370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.211386] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:36.559 [2024-10-13 17:59:26.211396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.211406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:36.559 [2024-10-13 17:59:26.211415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:36.559 [2024-10-13 17:59:26.211423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.223974] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:36.559 [2024-10-13 17:59:26.224142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.224155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:36.559 [2024-10-13 17:59:26.224166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.701 ms 00:29:36.559 [2024-10-13 17:59:26.224175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.226322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.226360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:36.559 [2024-10-13 17:59:26.226374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:29:36.559 [2024-10-13 17:59:26.226383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.226482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.226496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:36.559 [2024-10-13 17:59:26.226507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:36.559 [2024-10-13 17:59:26.226517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.559 [2024-10-13 17:59:26.226540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.559 [2024-10-13 17:59:26.226550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:36.559 [2024-10-13 17:59:26.226577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:36.559 [2024-10-13 17:59:26.226591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.560 [2024-10-13 17:59:26.226622] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:36.560 [2024-10-13 17:59:26.226634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.560 [2024-10-13 17:59:26.226643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:36.560 
[2024-10-13 17:59:26.226652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:36.560 [2024-10-13 17:59:26.226662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.560 [2024-10-13 17:59:26.253422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.560 [2024-10-13 17:59:26.253478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:36.560 [2024-10-13 17:59:26.253499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.738 ms 00:29:36.560 [2024-10-13 17:59:26.253508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.560 [2024-10-13 17:59:26.253615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.560 [2024-10-13 17:59:26.253628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:36.560 [2024-10-13 17:59:26.253637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:36.560 [2024-10-13 17:59:26.253645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.560 [2024-10-13 17:59:26.254834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.503 ms, result 0 00:29:37.947  [2024-10-13T17:59:28.706Z] Copying: 17/1024 [MB] (17 MBps) [2024-10-13T17:59:29.650Z] Copying: 38/1024 [MB] (21 MBps) [2024-10-13T17:59:30.594Z] Copying: 60/1024 [MB] (21 MBps) [2024-10-13T17:59:31.539Z] Copying: 78/1024 [MB] (17 MBps) [2024-10-13T17:59:32.484Z] Copying: 100/1024 [MB] (22 MBps) [2024-10-13T17:59:33.870Z] Copying: 117/1024 [MB] (16 MBps) [2024-10-13T17:59:34.444Z] Copying: 134/1024 [MB] (16 MBps) [2024-10-13T17:59:35.828Z] Copying: 145/1024 [MB] (11 MBps) [2024-10-13T17:59:36.774Z] Copying: 165/1024 [MB] (19 MBps) [2024-10-13T17:59:37.718Z] Copying: 182/1024 [MB] (17 MBps) [2024-10-13T17:59:38.732Z] Copying: 193/1024 [MB] (10 MBps) [2024-10-13T17:59:39.676Z] Copying: 203/1024 [MB] (10 MBps) [2024-10-13T17:59:40.619Z] Copying: 214/1024 [MB] (10 MBps) [2024-10-13T17:59:41.565Z] Copying: 226/1024 [MB] (12 MBps) [2024-10-13T17:59:42.510Z] Copying: 237/1024 [MB] (10 MBps) [2024-10-13T17:59:43.453Z] Copying: 247/1024 [MB] (10 MBps) [2024-10-13T17:59:44.842Z] Copying: 259/1024 [MB] (11 MBps) [2024-10-13T17:59:45.789Z] Copying: 269/1024 [MB] (10 MBps) [2024-10-13T17:59:46.734Z] Copying: 281/1024 [MB] (11 MBps) [2024-10-13T17:59:47.677Z] Copying: 291/1024 [MB] (10 MBps) [2024-10-13T17:59:48.619Z] Copying: 302/1024 [MB] (10 MBps) [2024-10-13T17:59:49.551Z] Copying: 313/1024 [MB] (10 MBps) [2024-10-13T17:59:50.489Z] Copying: 333/1024 [MB] (19 MBps) [2024-10-13T17:59:51.876Z] Copying: 350/1024 [MB] (16 MBps) [2024-10-13T17:59:52.449Z] Copying: 360/1024 [MB] (10 MBps) [2024-10-13T17:59:53.829Z] Copying: 373/1024 [MB] (13 MBps) [2024-10-13T17:59:54.768Z] Copying: 390/1024 [MB] (16 MBps) [2024-10-13T17:59:55.756Z] Copying: 405/1024 [MB] (15 MBps) [2024-10-13T17:59:56.689Z] Copying: 416/1024 [MB] (10 MBps) [2024-10-13T17:59:57.632Z] Copying: 435/1024 [MB] (18 MBps) [2024-10-13T17:59:58.572Z] Copying: 458/1024 [MB] (22 MBps) [2024-10-13T17:59:59.514Z] Copying: 468/1024 [MB] (10 MBps) [2024-10-13T18:00:00.452Z] Copying: 479/1024 [MB] (10 MBps) [2024-10-13T18:00:01.832Z] Copying: 497/1024 [MB] (17 MBps) [2024-10-13T18:00:02.767Z] Copying: 512/1024 [MB] (15 MBps) [2024-10-13T18:00:03.709Z] Copying: 525/1024 [MB] (12 MBps) [2024-10-13T18:00:04.648Z] Copying: 535/1024 [MB] (10 MBps) [2024-10-13T18:00:05.590Z] 
Copying: 547/1024 [MB] (11 MBps) [2024-10-13T18:00:06.532Z] Copying: 558/1024 [MB] (11 MBps) [2024-10-13T18:00:07.474Z] Copying: 578/1024 [MB] (20 MBps) [2024-10-13T18:00:08.451Z] Copying: 591/1024 [MB] (12 MBps) [2024-10-13T18:00:09.837Z] Copying: 604/1024 [MB] (13 MBps) [2024-10-13T18:00:10.781Z] Copying: 626/1024 [MB] (21 MBps) [2024-10-13T18:00:11.723Z] Copying: 648/1024 [MB] (21 MBps) [2024-10-13T18:00:12.665Z] Copying: 664/1024 [MB] (16 MBps) [2024-10-13T18:00:13.611Z] Copying: 692/1024 [MB] (27 MBps) [2024-10-13T18:00:14.556Z] Copying: 710/1024 [MB] (17 MBps) [2024-10-13T18:00:15.499Z] Copying: 727/1024 [MB] (16 MBps) [2024-10-13T18:00:16.440Z] Copying: 738/1024 [MB] (10 MBps) [2024-10-13T18:00:17.821Z] Copying: 748/1024 [MB] (10 MBps) [2024-10-13T18:00:18.760Z] Copying: 759/1024 [MB] (10 MBps) [2024-10-13T18:00:19.705Z] Copying: 781/1024 [MB] (22 MBps) [2024-10-13T18:00:20.646Z] Copying: 792/1024 [MB] (10 MBps) [2024-10-13T18:00:21.586Z] Copying: 804/1024 [MB] (11 MBps) [2024-10-13T18:00:22.528Z] Copying: 822/1024 [MB] (18 MBps) [2024-10-13T18:00:23.472Z] Copying: 833/1024 [MB] (11 MBps) [2024-10-13T18:00:24.856Z] Copying: 844/1024 [MB] (10 MBps) [2024-10-13T18:00:25.795Z] Copying: 855/1024 [MB] (10 MBps) [2024-10-13T18:00:26.737Z] Copying: 869/1024 [MB] (14 MBps) [2024-10-13T18:00:27.679Z] Copying: 880/1024 [MB] (10 MBps) [2024-10-13T18:00:28.620Z] Copying: 893/1024 [MB] (13 MBps) [2024-10-13T18:00:29.559Z] Copying: 904/1024 [MB] (10 MBps) [2024-10-13T18:00:30.500Z] Copying: 915/1024 [MB] (10 MBps) [2024-10-13T18:00:31.438Z] Copying: 934/1024 [MB] (19 MBps) [2024-10-13T18:00:32.821Z] Copying: 948/1024 [MB] (13 MBps) [2024-10-13T18:00:33.763Z] Copying: 959/1024 [MB] (10 MBps) [2024-10-13T18:00:34.701Z] Copying: 969/1024 [MB] (10 MBps) [2024-10-13T18:00:35.644Z] Copying: 983/1024 [MB] (13 MBps) [2024-10-13T18:00:36.587Z] Copying: 996/1024 [MB] (13 MBps) [2024-10-13T18:00:37.532Z] Copying: 1010/1024 [MB] (13 MBps) [2024-10-13T18:00:37.532Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-13 18:00:37.359734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.718 [2024-10-13 18:00:37.359824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:47.718 [2024-10-13 18:00:37.359844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:47.718 [2024-10-13 18:00:37.359853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.718 [2024-10-13 18:00:37.359877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:47.718 [2024-10-13 18:00:37.363199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.718 [2024-10-13 18:00:37.363258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:47.718 [2024-10-13 18:00:37.363271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:30:47.718 [2024-10-13 18:00:37.363280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.718 [2024-10-13 18:00:37.363534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.718 [2024-10-13 18:00:37.363554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:47.718 [2024-10-13 18:00:37.363578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:30:47.718 [2024-10-13 18:00:37.363586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.718 [2024-10-13 18:00:37.363621] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.718 [2024-10-13 18:00:37.363631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:47.718 [2024-10-13 18:00:37.363644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:47.718 [2024-10-13 18:00:37.363653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.718 [2024-10-13 18:00:37.363721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.718 [2024-10-13 18:00:37.363737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:47.718 [2024-10-13 18:00:37.363746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:47.718 [2024-10-13 18:00:37.363754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.718 [2024-10-13 18:00:37.363769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:47.718 [2024-10-13 18:00:37.363784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:30:47.718 [2024-10-13 18:00:37.363923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.363998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:47.718 [2024-10-13 18:00:37.364076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364517] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:47.719 [2024-10-13 18:00:37.364603] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:47.719 [2024-10-13 18:00:37.364613] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 00:30:47.719 [2024-10-13 18:00:37.364622] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:47.719 [2024-10-13 18:00:37.364633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:47.719 [2024-10-13 18:00:37.364640] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:47.719 [2024-10-13 18:00:37.364650] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:47.719 [2024-10-13 18:00:37.364658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:47.719 [2024-10-13 18:00:37.364668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:47.719 [2024-10-13 18:00:37.364677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:47.719 [2024-10-13 18:00:37.364684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:47.719 [2024-10-13 18:00:37.364691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:47.719 [2024-10-13 18:00:37.364698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.719 [2024-10-13 18:00:37.364707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:47.719 [2024-10-13 18:00:37.364715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:30:47.719 [2024-10-13 18:00:37.364724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.719 [2024-10-13 18:00:37.380373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.719 [2024-10-13 18:00:37.380432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:47.719 [2024-10-13 18:00:37.380445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.632 ms 00:30:47.719 [2024-10-13 18:00:37.380453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.719 [2024-10-13 18:00:37.380891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.719 [2024-10-13 18:00:37.380908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:47.719 [2024-10-13 18:00:37.380919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.413 ms 00:30:47.719 [2024-10-13 18:00:37.380927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.719 [2024-10-13 18:00:37.420363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.719 [2024-10-13 18:00:37.420417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:47.719 [2024-10-13 18:00:37.420429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.719 [2024-10-13 18:00:37.420440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.719 [2024-10-13 18:00:37.420523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.719 [2024-10-13 18:00:37.420533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:47.719 [2024-10-13 18:00:37.420543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.719 [2024-10-13 18:00:37.420569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.719 [2024-10-13 18:00:37.420634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.719 [2024-10-13 18:00:37.420647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:47.719 [2024-10-13 18:00:37.420656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.720 [2024-10-13 18:00:37.420665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.720 [2024-10-13 18:00:37.420684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.720 [2024-10-13 18:00:37.420693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:47.720 [2024-10-13 18:00:37.420702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.720 [2024-10-13 18:00:37.420709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.720 [2024-10-13 18:00:37.512942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.720 [2024-10-13 18:00:37.513011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:47.720 [2024-10-13 18:00:37.513027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.720 [2024-10-13 18:00:37.513037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:47.980 [2024-10-13 18:00:37.587121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:47.980 [2024-10-13 18:00:37.587267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:47.980 
[2024-10-13 18:00:37.587348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:47.980 [2024-10-13 18:00:37.587468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:47.980 [2024-10-13 18:00:37.587650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:47.980 [2024-10-13 18:00:37.587761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.587829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.980 [2024-10-13 18:00:37.587847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:47.980 [2024-10-13 18:00:37.587856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.980 [2024-10-13 18:00:37.587866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.980 [2024-10-13 18:00:37.588029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 228.249 ms, result 0 00:30:48.551 00:30:48.551 00:30:48.812 18:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:50.722 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:50.722 18:00:40 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:50.984 [2024-10-13 18:00:40.567752] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
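The restore.sh steps logged above capture the whole ftl_restore_fast verification loop: once the 'FTL fast shutdown' management process finishes, the data previously written through ftl0 is read back and checked against a stored digest (md5sum -c reports OK for testfile), and a second spdk_dd pass then rewrites the same payload at a different logical offset before the next shutdown/restore cycle. Below is a minimal sketch of that verify-then-rewrite pattern, not the test script itself; it assumes $SPDK_REPO as shorthand for the /home/vagrant/spdk_repo/spdk root seen in the log, and reuses the ftl0 bdev name, ftl.json config, and testfile.md5 digest file exactly as they appear in the command lines above. Note that spdk_dd's --seek counts output I/O units, not bytes.

  # Verify the payload that survived the FTL fast shutdown/restore cycle
  # against the digest recorded when it was first written.
  md5sum -c "$SPDK_REPO/test/ftl/testfile.md5" || exit 1

  # Rewrite the same payload at a different logical offset on the ftl0 bdev;
  # --ob names the output bdev, --json supplies the bdev configuration, and
  # --seek skips that many I/O units at the start of the output.
  "$SPDK_REPO/build/bin/spdk_dd" \
      --if="$SPDK_REPO/test/ftl/testfile" \
      --ob=ftl0 \
      --json="$SPDK_REPO/test/ftl/config/ftl.json" \
      --seek=131072

Writing at a fresh offset forces the restored FTL instance to allocate new bands rather than overwrite in place, which is what the subsequent startup trace (Check configuration, Open base bdev, Load super block with SHM: clean 1, and the layout/metadata restore steps that follow) is exercising.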
00:30:50.984 [2024-10-13 18:00:40.567904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83493 ] 00:30:50.984 [2024-10-13 18:00:40.718898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.244 [2024-10-13 18:00:40.835488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.505 [2024-10-13 18:00:41.163393] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:51.505 [2024-10-13 18:00:41.163498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:51.768 [2024-10-13 18:00:41.331770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.768 [2024-10-13 18:00:41.331841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:51.768 [2024-10-13 18:00:41.331859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:51.768 [2024-10-13 18:00:41.331874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.768 [2024-10-13 18:00:41.331938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.768 [2024-10-13 18:00:41.331950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:51.768 [2024-10-13 18:00:41.331960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:51.768 [2024-10-13 18:00:41.331972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.768 [2024-10-13 18:00:41.331995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:51.768 [2024-10-13 18:00:41.332820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:51.768 [2024-10-13 18:00:41.332874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.768 [2024-10-13 18:00:41.332886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:51.768 [2024-10-13 18:00:41.332897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:30:51.768 [2024-10-13 18:00:41.332905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.768 [2024-10-13 18:00:41.333262] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:51.768 [2024-10-13 18:00:41.333299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.768 [2024-10-13 18:00:41.333309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:51.768 [2024-10-13 18:00:41.333319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:51.768 [2024-10-13 18:00:41.333331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.768 [2024-10-13 18:00:41.333394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.768 [2024-10-13 18:00:41.333404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:51.768 [2024-10-13 18:00:41.333413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:51.768 [2024-10-13 18:00:41.333422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.768 [2024-10-13 18:00:41.333734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:51.768 [2024-10-13 18:00:41.333755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:51.769 [2024-10-13 18:00:41.333767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:30:51.769 [2024-10-13 18:00:41.333775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.333853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.333862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:51.769 [2024-10-13 18:00:41.333871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:51.769 [2024-10-13 18:00:41.333881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.333912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.333922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:51.769 [2024-10-13 18:00:41.333930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:51.769 [2024-10-13 18:00:41.333939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.333964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:51.769 [2024-10-13 18:00:41.339004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.339048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:51.769 [2024-10-13 18:00:41.339062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.043 ms 00:30:51.769 [2024-10-13 18:00:41.339070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.339109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.339117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:51.769 [2024-10-13 18:00:41.339126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:51.769 [2024-10-13 18:00:41.339134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.339178] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:51.769 [2024-10-13 18:00:41.339204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:51.769 [2024-10-13 18:00:41.339243] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:51.769 [2024-10-13 18:00:41.339263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:51.769 [2024-10-13 18:00:41.339374] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:51.769 [2024-10-13 18:00:41.339386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:51.769 [2024-10-13 18:00:41.339397] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:51.769 [2024-10-13 18:00:41.339409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339419] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:51.769 [2024-10-13 18:00:41.339436] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:51.769 [2024-10-13 18:00:41.339447] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:51.769 [2024-10-13 18:00:41.339455] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:51.769 [2024-10-13 18:00:41.339463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.339471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:51.769 [2024-10-13 18:00:41.339495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:30:51.769 [2024-10-13 18:00:41.339503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.339602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.769 [2024-10-13 18:00:41.339620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:51.769 [2024-10-13 18:00:41.339629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:51.769 [2024-10-13 18:00:41.339636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.769 [2024-10-13 18:00:41.339747] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:51.769 [2024-10-13 18:00:41.339765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:51.769 [2024-10-13 18:00:41.339775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:51.769 [2024-10-13 18:00:41.339798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:51.769 [2024-10-13 18:00:41.339819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:51.769 [2024-10-13 18:00:41.339833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:51.769 [2024-10-13 18:00:41.339840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:51.769 [2024-10-13 18:00:41.339847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:51.769 [2024-10-13 18:00:41.339854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:51.769 [2024-10-13 18:00:41.339861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:51.769 [2024-10-13 18:00:41.339870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:51.769 [2024-10-13 18:00:41.339891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339899] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:51.769 [2024-10-13 18:00:41.339913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:51.769 [2024-10-13 18:00:41.339934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:51.769 [2024-10-13 18:00:41.339955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:51.769 [2024-10-13 18:00:41.339976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:51.769 [2024-10-13 18:00:41.339982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.769 [2024-10-13 18:00:41.339989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:51.769 [2024-10-13 18:00:41.339995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:51.769 [2024-10-13 18:00:41.340002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:51.769 [2024-10-13 18:00:41.340009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:51.769 [2024-10-13 18:00:41.340015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:51.769 [2024-10-13 18:00:41.340022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:51.769 [2024-10-13 18:00:41.340028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:51.769 [2024-10-13 18:00:41.340037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:51.769 [2024-10-13 18:00:41.340044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.340051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:51.769 [2024-10-13 18:00:41.340057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:51.769 [2024-10-13 18:00:41.340064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.340070] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:51.769 [2024-10-13 18:00:41.340077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:51.769 [2024-10-13 18:00:41.340084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:51.769 [2024-10-13 18:00:41.340091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.769 [2024-10-13 18:00:41.340103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:51.769 [2024-10-13 18:00:41.340111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:51.769 [2024-10-13 18:00:41.340118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:51.769 
[2024-10-13 18:00:41.340125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:51.769 [2024-10-13 18:00:41.340132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:51.769 [2024-10-13 18:00:41.340139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:51.769 [2024-10-13 18:00:41.340148] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:51.769 [2024-10-13 18:00:41.340157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:51.769 [2024-10-13 18:00:41.340168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:51.769 [2024-10-13 18:00:41.340176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:51.769 [2024-10-13 18:00:41.340184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:51.769 [2024-10-13 18:00:41.340190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:51.769 [2024-10-13 18:00:41.340197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:51.769 [2024-10-13 18:00:41.340204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:51.769 [2024-10-13 18:00:41.340211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:51.769 [2024-10-13 18:00:41.340218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:51.769 [2024-10-13 18:00:41.340225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:51.769 [2024-10-13 18:00:41.340232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:51.769 [2024-10-13 18:00:41.340239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:51.769 [2024-10-13 18:00:41.340246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:51.770 [2024-10-13 18:00:41.340255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:51.770 [2024-10-13 18:00:41.340262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:51.770 [2024-10-13 18:00:41.340269] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:51.770 [2024-10-13 18:00:41.340278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:51.770 [2024-10-13 18:00:41.340286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:51.770 [2024-10-13 18:00:41.340294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:51.770 [2024-10-13 18:00:41.340301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:51.770 [2024-10-13 18:00:41.340310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:51.770 [2024-10-13 18:00:41.340318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.340326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:51.770 [2024-10-13 18:00:41.340334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:30:51.770 [2024-10-13 18:00:41.340341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.372176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.372225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:51.770 [2024-10-13 18:00:41.372238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.790 ms 00:30:51.770 [2024-10-13 18:00:41.372247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.372333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.372343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:51.770 [2024-10-13 18:00:41.372351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:51.770 [2024-10-13 18:00:41.372359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.425891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.425952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:51.770 [2024-10-13 18:00:41.425966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.470 ms 00:30:51.770 [2024-10-13 18:00:41.425975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.426027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.426038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.770 [2024-10-13 18:00:41.426052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:51.770 [2024-10-13 18:00:41.426061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.426193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.426206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.770 [2024-10-13 18:00:41.426215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:51.770 [2024-10-13 18:00:41.426224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.426369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.426380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.770 [2024-10-13 18:00:41.426392] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:30:51.770 [2024-10-13 18:00:41.426401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.444588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.444642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.770 [2024-10-13 18:00:41.444654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.166 ms 00:30:51.770 [2024-10-13 18:00:41.444663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.444805] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:51.770 [2024-10-13 18:00:41.444820] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:51.770 [2024-10-13 18:00:41.444830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.444840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:51.770 [2024-10-13 18:00:41.444850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:51.770 [2024-10-13 18:00:41.444862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.457184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.457225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:51.770 [2024-10-13 18:00:41.457237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:30:51.770 [2024-10-13 18:00:41.457246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.457386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.457396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:51.770 [2024-10-13 18:00:41.457405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:30:51.770 [2024-10-13 18:00:41.457414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.457466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.457482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:51.770 [2024-10-13 18:00:41.457490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:51.770 [2024-10-13 18:00:41.457498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.458118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.458133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:51.770 [2024-10-13 18:00:41.458141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:30:51.770 [2024-10-13 18:00:41.458149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.458167] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:51.770 [2024-10-13 18:00:41.458179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.458190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:51.770 [2024-10-13 18:00:41.458201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:51.770 [2024-10-13 18:00:41.458209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.472348] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:51.770 [2024-10-13 18:00:41.472526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.472537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:51.770 [2024-10-13 18:00:41.472551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.297 ms 00:30:51.770 [2024-10-13 18:00:41.472580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.474786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.474818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:51.770 [2024-10-13 18:00:41.474832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:30:51.770 [2024-10-13 18:00:41.474840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.474943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.474955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:51.770 [2024-10-13 18:00:41.474966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:51.770 [2024-10-13 18:00:41.474974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.475003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.475012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:51.770 [2024-10-13 18:00:41.475022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:51.770 [2024-10-13 18:00:41.475035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.475071] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:51.770 [2024-10-13 18:00:41.475081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.475089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:51.770 [2024-10-13 18:00:41.475098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:51.770 [2024-10-13 18:00:41.475105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.504019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.504073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:51.770 [2024-10-13 18:00:41.504094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.891 ms 00:30:51.770 [2024-10-13 18:00:41.504103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.770 [2024-10-13 18:00:41.504196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.770 [2024-10-13 18:00:41.504207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:51.770 [2024-10-13 18:00:41.504216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.042 ms
00:30:51.770 [2024-10-13 18:00:41.504225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:51.770 [2024-10-13 18:00:41.505795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 173.446 ms, result 0
00:30:52.715  [2024-10-13T18:00:43.916Z] Copying: 22/1024 [MB] (22 MBps)
[... intermediate spdk_dd progress updates condensed: 40/1024 MB up to 1023/1024 MB between 2024-10-13T18:00:44.860Z and 18:01:47.156Z, per-step throughput roughly 10-26 MBps ...]
[2024-10-13T18:01:47.156Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-10-13 18:01:47.046716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:57.342 [2024-10-13 18:01:47.046950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:57.342 [2024-10-13 18:01:47.046990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:31:57.342 [2024-10-13 18:01:47.047000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:57.342 [2024-10-13 18:01:47.049475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:57.342 [2024-10-13 18:01:47.054344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:57.342 [2024-10-13 18:01:47.054395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:57.342 [2024-10-13 18:01:47.054407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms
00:31:57.342 [2024-10-13 18:01:47.054418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:57.342 [2024-10-13 18:01:47.068466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:57.342 [2024-10-13 18:01:47.068518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:57.342 [2024-10-13 18:01:47.068538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.978 ms
00:31:57.342 [2024-10-13 18:01:47.068548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:57.342 [2024-10-13 18:01:47.068594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:57.342 [2024-10-13 18:01:47.068605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:31:57.342 [2024-10-13 18:01:47.068615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:31:57.343 [2024-10-13 18:01:47.068624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:57.343 [2024-10-13 18:01:47.068693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:57.343 [2024-10-13 18:01:47.068703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:31:57.343 [2024-10-13 18:01:47.068714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:31:57.343 [2024-10-13 18:01:47.068722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:57.343 [2024-10-13 18:01:47.068741] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:57.343 [2024-10-13 18:01:47.068756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 80128 / 261120 wr_cnt: 1 state: open
00:31:57.343 [2024-10-13 18:01:47.068766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
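
A note on the dumps around this point: ftl_dev_dump_bands prints one line per band as '<valid blocks> / <band capacity> wr_cnt: <write count> state: <state>', and in this pass only Band 1 holds data (80128 of 261120 blocks valid). The ftl_dev_dump_stats block further below reports WAF (write amplification factor), which for these counters is simply total writes divided by user writes. A minimal C sketch of that arithmetic, with values copied from the stats dump (variable names are illustrative, not SPDK API):

    /* Minimal sketch (not SPDK source): the WAF figure in the stats dump
     * below is the ratio of the two counters printed alongside it. */
    #include <stdio.h>

    int main(void)
    {
        const double total_writes = 80160.0; /* "total writes" (media writes) */
        const double user_writes  = 80128.0; /* "user writes" (host writes)   */

        /* 80160 / 80128 = 1.000399... -> printed as "WAF: 1.0004" */
        printf("WAF: %.4f\n", total_writes / user_writes);
        return 0;
    }

80160 / 80128 = 1.000399..., which rounds to the 'WAF: 1.0004' reported below, i.e. essentially no write amplification for this sequential workload.
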
00:31:57.343 [2024-10-13 18:01:47.068775 .. 18:01:47.069606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (98 identical per-band lines condensed)
00:31:57.344 [2024-10-13 18:01:47.069624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:57.344 [2024-10-13 18:01:47.069633] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] device UUID: 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 00:31:57.344 [2024-10-13 18:01:47.069645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 80128 00:31:57.344 [2024-10-13 18:01:47.069653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 80160 00:31:57.344 [2024-10-13 18:01:47.069663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 80128 00:31:57.344 [2024-10-13 18:01:47.069672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0004 00:31:57.344 [2024-10-13 18:01:47.069681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:57.344 [2024-10-13 18:01:47.069692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:57.344 [2024-10-13 18:01:47.069700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:57.344 [2024-10-13 18:01:47.069708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:57.344 [2024-10-13 18:01:47.069715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:57.344 [2024-10-13 18:01:47.069724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.344 [2024-10-13 18:01:47.069739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:57.344 [2024-10-13 18:01:47.069748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:31:57.344 [2024-10-13 18:01:47.069756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.084296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.344 [2024-10-13 18:01:47.084347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:57.344 [2024-10-13 18:01:47.084359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.524 ms 00:31:57.344 [2024-10-13 18:01:47.084367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.084805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.344 [2024-10-13 18:01:47.084943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:57.344 [2024-10-13 18:01:47.084954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:31:57.344 [2024-10-13 18:01:47.084962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.124332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.344 [2024-10-13 18:01:47.124386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:57.344 [2024-10-13 18:01:47.124398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.344 [2024-10-13 18:01:47.124414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.124486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.344 [2024-10-13 18:01:47.124497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:57.344 [2024-10-13 18:01:47.124507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.344 [2024-10-13 18:01:47.124517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.124596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.344 [2024-10-13 18:01:47.124611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:31:57.344 [2024-10-13 18:01:47.124621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.344 [2024-10-13 18:01:47.124630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.344 [2024-10-13 18:01:47.124652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.344 [2024-10-13 18:01:47.124662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:57.344 [2024-10-13 18:01:47.124670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.344 [2024-10-13 18:01:47.124679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.605 [2024-10-13 18:01:47.216719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.605 [2024-10-13 18:01:47.216777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:57.605 [2024-10-13 18:01:47.216792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.605 [2024-10-13 18:01:47.216809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.605 [2024-10-13 18:01:47.291622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.605 [2024-10-13 18:01:47.291684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:57.605 [2024-10-13 18:01:47.291698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.605 [2024-10-13 18:01:47.291715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.605 [2024-10-13 18:01:47.291828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.605 [2024-10-13 18:01:47.291840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:57.605 [2024-10-13 18:01:47.291851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.605 [2024-10-13 18:01:47.291862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.605 [2024-10-13 18:01:47.291904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.605 [2024-10-13 18:01:47.291921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:57.605 [2024-10-13 18:01:47.291931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.605 [2024-10-13 18:01:47.291939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.605 [2024-10-13 18:01:47.292029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.606 [2024-10-13 18:01:47.292039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:57.606 [2024-10-13 18:01:47.292048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.606 [2024-10-13 18:01:47.292057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.606 [2024-10-13 18:01:47.292092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.606 [2024-10-13 18:01:47.292107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:57.606 [2024-10-13 18:01:47.292116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.606 [2024-10-13 18:01:47.292125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.606 [2024-10-13 18:01:47.292178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.606 [2024-10-13 
18:01:47.292246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:57.606 [2024-10-13 18:01:47.292256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.606 [2024-10-13 18:01:47.292265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.606 [2024-10-13 18:01:47.292324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.606 [2024-10-13 18:01:47.292349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:57.606 [2024-10-13 18:01:47.292360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.606 [2024-10-13 18:01:47.292369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.606 [2024-10-13 18:01:47.292529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 246.509 ms, result 0 00:31:58.992 00:31:58.992 00:31:58.992 18:01:48 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:58.992 [2024-10-13 18:01:48.683176] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:31:58.992 [2024-10-13 18:01:48.683357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84193 ] 00:31:59.255 [2024-10-13 18:01:48.841045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.255 [2024-10-13 18:01:48.986192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.516 [2024-10-13 18:01:49.317963] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:59.516 [2024-10-13 18:01:49.318055] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:59.778 [2024-10-13 18:01:49.482755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.778 [2024-10-13 18:01:49.482823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:59.778 [2024-10-13 18:01:49.482842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:59.778 [2024-10-13 18:01:49.482857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.778 [2024-10-13 18:01:49.482917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.778 [2024-10-13 18:01:49.482929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.778 [2024-10-13 18:01:49.482939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:59.778 [2024-10-13 18:01:49.482951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.778 [2024-10-13 18:01:49.482975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:59.778 [2024-10-13 18:01:49.484178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:59.778 [2024-10-13 18:01:49.484246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.778 [2024-10-13 18:01:49.484261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
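
A note on the spdk_dd invocation above: --ib=ftl0 reads from the FTL bdev, --of writes the data back out to the test file, and --skip/--count appear to follow dd semantics, counted in I/O units rather than bytes. Assuming the FTL's 4 KiB block size (consistent with the base-device layout dumped below, where the 0x1900000-block data region spans 102400 MiB, i.e. 4096 bytes per block), the copy is 262144 x 4096 B = 1024 MiB starting 131072 x 4096 B = 512 MiB into the device, which matches the 'Copying: .../1024 [MB]' progress reported further below.
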
00:31:59.778 [2024-10-13 18:01:49.484273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:31:59.778 [2024-10-13 18:01:49.484282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.484970] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:59.779 [2024-10-13 18:01:49.485043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.485056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:59.779 [2024-10-13 18:01:49.485069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:31:59.779 [2024-10-13 18:01:49.485084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.485151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.485161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:59.779 [2024-10-13 18:01:49.485170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:59.779 [2024-10-13 18:01:49.485179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.485486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.485510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.779 [2024-10-13 18:01:49.485522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:31:59.779 [2024-10-13 18:01:49.485531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.485640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.485652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.779 [2024-10-13 18:01:49.485662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:31:59.779 [2024-10-13 18:01:49.485671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.485696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.485713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:59.779 [2024-10-13 18:01:49.485722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:59.779 [2024-10-13 18:01:49.485730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.485758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:59.779 [2024-10-13 18:01:49.490683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.490724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.779 [2024-10-13 18:01:49.490738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:31:59.779 [2024-10-13 18:01:49.490746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.490783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.490792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:59.779 [2024-10-13 18:01:49.490801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 
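
Before the layout dump that follows, one internal consistency check worth noting: the size of the L2P (logical-to-physical) table follows directly from two values printed there, 'L2P entries' and 'L2P address size'. A minimal C sketch of the arithmetic (values copied from this log; names illustrative, not SPDK API):

    /* Minimal sketch: L2P table size implied by the layout parameters
     * printed in the dump below. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long l2p_entries   = 20971520ULL; /* "L2P entries"      */
        const unsigned long long l2p_addr_size = 4ULL;        /* "L2P address size" (bytes) */

        /* 20971520 * 4 B = 83886080 B = 80.00 MiB */
        printf("L2P table: %.2f MiB\n",
               (double)(l2p_entries * l2p_addr_size) / (1024.0 * 1024.0));
        return 0;
    }

20971520 x 4 B = 83886080 B = 80.00 MiB, exactly the 'Region l2p ... blocks: 80.00 MiB' entry in the NV cache layout dump below.
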
00:31:59.779 [2024-10-13 18:01:49.490808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.490871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:59.779 [2024-10-13 18:01:49.490902] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:59.779 [2024-10-13 18:01:49.490940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:59.779 [2024-10-13 18:01:49.490960] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:59.779 [2024-10-13 18:01:49.491073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:59.779 [2024-10-13 18:01:49.491088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:59.779 [2024-10-13 18:01:49.491099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:59.779 [2024-10-13 18:01:49.491110] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491121] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491129] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:59.779 [2024-10-13 18:01:49.491138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:59.779 [2024-10-13 18:01:49.491148] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:59.779 [2024-10-13 18:01:49.491156] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:59.779 [2024-10-13 18:01:49.491165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.491173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:59.779 [2024-10-13 18:01:49.491180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:31:59.779 [2024-10-13 18:01:49.491188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.491285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.779 [2024-10-13 18:01:49.491294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:59.779 [2024-10-13 18:01:49.491302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:59.779 [2024-10-13 18:01:49.491309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.779 [2024-10-13 18:01:49.491423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:59.779 [2024-10-13 18:01:49.491477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:59.779 [2024-10-13 18:01:49.491486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:59.779 [2024-10-13 18:01:49.491510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491518] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:59.779 [2024-10-13 18:01:49.491534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.779 [2024-10-13 18:01:49.491549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:59.779 [2024-10-13 18:01:49.491575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:59.779 [2024-10-13 18:01:49.491584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.779 [2024-10-13 18:01:49.491592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:59.779 [2024-10-13 18:01:49.491599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:59.779 [2024-10-13 18:01:49.491606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:59.779 [2024-10-13 18:01:49.491628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:59.779 [2024-10-13 18:01:49.491649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:59.779 [2024-10-13 18:01:49.491670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:59.779 [2024-10-13 18:01:49.491693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:59.779 [2024-10-13 18:01:49.491715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:59.779 [2024-10-13 18:01:49.491738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.779 [2024-10-13 18:01:49.491757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:59.779 [2024-10-13 18:01:49.491765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:59.779 [2024-10-13 18:01:49.491771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.779 [2024-10-13 18:01:49.491782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:59.779 [2024-10-13 
18:01:49.491789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:59.779 [2024-10-13 18:01:49.491800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:59.779 [2024-10-13 18:01:49.491820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:59.779 [2024-10-13 18:01:49.491831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:59.779 [2024-10-13 18:01:49.491858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:59.779 [2024-10-13 18:01:49.491867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:59.779 [2024-10-13 18:01:49.491879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.779 [2024-10-13 18:01:49.491891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:59.779 [2024-10-13 18:01:49.491902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:59.779 [2024-10-13 18:01:49.491909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:59.780 [2024-10-13 18:01:49.491916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:59.780 [2024-10-13 18:01:49.491927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:59.780 [2024-10-13 18:01:49.491934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:59.780 [2024-10-13 18:01:49.491943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:59.780 [2024-10-13 18:01:49.491957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.491973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:59.780 [2024-10-13 18:01:49.491985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:59.780 [2024-10-13 18:01:49.491992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:59.780 [2024-10-13 18:01:49.491999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:59.780 [2024-10-13 18:01:49.492010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:59.780 [2024-10-13 18:01:49.492017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:59.780 [2024-10-13 18:01:49.492025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:59.780 [2024-10-13 18:01:49.492032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:59.780 [2024-10-13 18:01:49.492045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 
blk_sz:0x40 00:31:59.780 [2024-10-13 18:01:49.492053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:59.780 [2024-10-13 18:01:49.492116] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:59.780 [2024-10-13 18:01:49.492125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:59.780 [2024-10-13 18:01:49.492143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:59.780 [2024-10-13 18:01:49.492151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:59.780 [2024-10-13 18:01:49.492159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:59.780 [2024-10-13 18:01:49.492169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.492180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:59.780 [2024-10-13 18:01:49.492188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:31:59.780 [2024-10-13 18:01:49.492197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.524005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.524052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.780 [2024-10-13 18:01:49.524065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.762 ms 00:31:59.780 [2024-10-13 18:01:49.524074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.524161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.524172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:59.780 [2024-10-13 18:01:49.524182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:59.780 [2024-10-13 18:01:49.524191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.571169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.571230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.780 [2024-10-13 18:01:49.571244] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.918 ms 00:31:59.780 [2024-10-13 18:01:49.571253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.571303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.571314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.780 [2024-10-13 18:01:49.571328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:59.780 [2024-10-13 18:01:49.571338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.571468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.571482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.780 [2024-10-13 18:01:49.571492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:59.780 [2024-10-13 18:01:49.571501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.571667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.571679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.780 [2024-10-13 18:01:49.571690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:31:59.780 [2024-10-13 18:01:49.571703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.589799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.780 [2024-10-13 18:01:49.589847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.780 [2024-10-13 18:01:49.589863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.072 ms 00:31:59.780 [2024-10-13 18:01:49.589872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.780 [2024-10-13 18:01:49.590042] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:59.780 [2024-10-13 18:01:49.590059] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:00.043 [2024-10-13 18:01:49.590070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.590079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:00.043 [2024-10-13 18:01:49.590088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:00.043 [2024-10-13 18:01:49.590100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.602577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.602624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:00.043 [2024-10-13 18:01:49.602636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:32:00.043 [2024-10-13 18:01:49.602645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.602787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.602799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:00.043 [2024-10-13 18:01:49.602809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 
ms 00:32:00.043 [2024-10-13 18:01:49.602817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.602868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.602885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:00.043 [2024-10-13 18:01:49.602894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:00.043 [2024-10-13 18:01:49.602902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.603620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.603643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:00.043 [2024-10-13 18:01:49.603654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:32:00.043 [2024-10-13 18:01:49.603663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.603685] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:00.043 [2024-10-13 18:01:49.603697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.603709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:00.043 [2024-10-13 18:01:49.603735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:00.043 [2024-10-13 18:01:49.603744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.617869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:00.043 [2024-10-13 18:01:49.618045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.618058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:00.043 [2024-10-13 18:01:49.618068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.281 ms 00:32:00.043 [2024-10-13 18:01:49.618078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.620456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.620494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:00.043 [2024-10-13 18:01:49.620510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:32:00.043 [2024-10-13 18:01:49.620517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.620620] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:00.043 [2024-10-13 18:01:49.620932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.620951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:00.043 [2024-10-13 18:01:49.620961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:32:00.043 [2024-10-13 18:01:49.620969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.043 [2024-10-13 18:01:49.620997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.043 [2024-10-13 18:01:49.621007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:00.043 [2024-10-13 18:01:49.621020] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:32:00.043 [2024-10-13 18:01:49.621029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:00.043 [2024-10-13 18:01:49.621069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:00.043 [2024-10-13 18:01:49.621080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:00.043 [2024-10-13 18:01:49.621089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:32:00.043 [2024-10-13 18:01:49.621098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:32:00.043 [2024-10-13 18:01:49.621106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:00.043 [2024-10-13 18:01:49.648773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:00.043 [2024-10-13 18:01:49.648831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:32:00.043 [2024-10-13 18:01:49.648845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.648 ms
00:32:00.043 [2024-10-13 18:01:49.648854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:00.043 [2024-10-13 18:01:49.648947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:00.043 [2024-10-13 18:01:49.648959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:00.043 [2024-10-13 18:01:49.648968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:32:00.043 [2024-10-13 18:01:49.648977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:00.043 [2024-10-13 18:01:49.650328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.050 ms, result 0
00:32:01.432  [2024-10-13T18:01:52.192Z] Copying: 10/1024 [MB] (10 MBps)
[... intermediate spdk_dd progress updates condensed: 27/1024 MB up to 1015/1024 MB between 2024-10-13T18:01:53.137Z and 18:02:53.497Z, per-step throughput roughly 10-23 MBps ...]
[2024-10-13T18:02:53.497Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-10-13 18:02:53.363998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:03.683 [2024-10-13 18:02:53.364081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:03.684 [2024-10-13 18:02:53.364099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:03.684 [2024-10-13 18:02:53.364109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:03.684 [2024-10-13 18:02:53.364133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:03.684 [2024-10-13 18:02:53.368006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:03.684 [2024-10-13 18:02:53.368050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:03.684 [2024-10-13 18:02:53.368063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms
00:33:03.684 [2024-10-13 18:02:53.368071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:03.684 [2024-10-13 18:02:53.368479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:03.684 [2024-10-13 18:02:53.368498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:03.684 [2024-10-13 18:02:53.368514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms
00:33:03.684 [2024-10-13
18:02:53.368524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.684 [2024-10-13 18:02:53.368572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.684 [2024-10-13 18:02:53.368583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:03.684 [2024-10-13 18:02:53.368592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:03.684 [2024-10-13 18:02:53.368601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.684 [2024-10-13 18:02:53.368666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.684 [2024-10-13 18:02:53.368681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:03.684 [2024-10-13 18:02:53.368690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:03.684 [2024-10-13 18:02:53.368701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.684 [2024-10-13 18:02:53.368716] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:03.684 [2024-10-13 18:02:53.368731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:03.684 [2024-10-13 18:02:53.368742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 
[2024-10-13 18:02:53.368869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.368993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:33:03.684 [2024-10-13 18:02:53.369058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:03.684 [2024-10-13 18:02:53.369326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:03.685 [2024-10-13 18:02:53.369526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:03.685 [2024-10-13 18:02:53.369534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6 00:33:03.685 [2024-10-13 18:02:53.369542] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:03.685 [2024-10-13 18:02:53.369549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 50976 00:33:03.685 [2024-10-13 18:02:53.369569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 50944 00:33:03.685 [2024-10-13 18:02:53.369578] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0006 00:33:03.685 [2024-10-13 18:02:53.369586] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:03.685 [2024-10-13 18:02:53.369594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:03.685 [2024-10-13 18:02:53.369605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:03.685 [2024-10-13 18:02:53.369612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:03.685 [2024-10-13 18:02:53.369618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:03.685 [2024-10-13 18:02:53.369625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.685 [2024-10-13 18:02:53.369633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:03.685 [2024-10-13 18:02:53.369643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:33:03.685 [2024-10-13 18:02:53.369651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.384977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.685 [2024-10-13 18:02:53.385017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:03.685 [2024-10-13 18:02:53.385029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.308 ms 00:33:03.685 [2024-10-13 18:02:53.385038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.385462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.685 [2024-10-13 18:02:53.385481] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:03.685 [2024-10-13 18:02:53.385492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:33:03.685 [2024-10-13 18:02:53.385499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.424579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.685 [2024-10-13 18:02:53.424624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:03.685 [2024-10-13 18:02:53.424642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.685 [2024-10-13 18:02:53.424651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.424727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.685 [2024-10-13 18:02:53.424738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:03.685 [2024-10-13 18:02:53.424748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.685 [2024-10-13 18:02:53.424757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.424818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.685 [2024-10-13 18:02:53.424829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:03.685 [2024-10-13 18:02:53.424839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.685 [2024-10-13 18:02:53.424851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.685 [2024-10-13 18:02:53.424868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.685 [2024-10-13 18:02:53.424877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:03.685 [2024-10-13 18:02:53.424886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.685 [2024-10-13 18:02:53.424894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.515400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.515465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:03.947 [2024-10-13 18:02:53.515478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.515494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.580732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.580785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:03.947 [2024-10-13 18:02:53.580805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.580813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.580910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.580920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:03.947 [2024-10-13 18:02:53.580928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.580936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.580976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:33:03.947 [2024-10-13 18:02:53.580984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:03.947 [2024-10-13 18:02:53.580992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.580999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.581071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.581079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:03.947 [2024-10-13 18:02:53.581086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.581093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.581117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.581128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:03.947 [2024-10-13 18:02:53.581134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.581140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.581185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.581192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:03.947 [2024-10-13 18:02:53.581199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.581206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.581259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.947 [2024-10-13 18:02:53.581268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:03.947 [2024-10-13 18:02:53.581275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.947 [2024-10-13 18:02:53.581282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.947 [2024-10-13 18:02:53.581416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 217.386 ms, result 0 00:33:04.519 00:33:04.519 00:33:04.519 18:02:54 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:06.435 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:06.435 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:06.435 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:06.435 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:06.697 Process with pid 81878 is not found 00:33:06.697 Remove shared memory files 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81878 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 81878 ']' 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 81878 00:33:06.697 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (81878) - No such process 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 81878 is not found' 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:06.697 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_band_md /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_l2p_l1 /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_l2p_l2 /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_l2p_l2_ctx /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_nvc_md /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_p2l_pool /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_sb /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_sb_shm /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_trim_bitmap /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_trim_log /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_trim_md /dev/hugepages/ftl_9e25fb9a-cdcc-44c5-8f80-87d7525e4ba6_vmap 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:06.698 00:33:06.698 real 4m55.301s 00:33:06.698 user 4m42.442s 00:33:06.698 sys 0m12.556s 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:06.698 18:02:56 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:06.698 ************************************ 00:33:06.698 END TEST ftl_restore_fast 00:33:06.698 ************************************ 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@14 -- # killprocess 72665 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@950 -- # '[' -z 72665 ']' 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@954 -- # kill -0 72665 00:33:06.698 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72665) - No such process 00:33:06.698 Process with pid 72665 is not found 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72665 is not found' 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84891 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84891 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@831 -- # '[' -z 84891 ']' 00:33:06.698 18:02:56 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:06.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:06.698 18:02:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:06.698 [2024-10-13 18:02:56.452219] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:33:06.698 [2024-10-13 18:02:56.452332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84891 ] 00:33:06.960 [2024-10-13 18:02:56.596670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.960 [2024-10-13 18:02:56.690328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.534 18:02:57 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:07.534 18:02:57 ftl -- common/autotest_common.sh@864 -- # return 0 00:33:07.534 18:02:57 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:07.798 nvme0n1 00:33:07.798 18:02:57 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:07.798 18:02:57 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:07.798 18:02:57 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:08.059 18:02:57 ftl -- ftl/common.sh@28 -- # stores=46e0ed76-29a7-4695-add2-33ba81c8546f 00:33:08.059 18:02:57 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:08.059 18:02:57 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 46e0ed76-29a7-4695-add2-33ba81c8546f 00:33:08.320 18:02:57 ftl -- ftl/ftl.sh@23 -- # killprocess 84891 00:33:08.320 18:02:57 ftl -- common/autotest_common.sh@950 -- # '[' -z 84891 ']' 00:33:08.320 18:02:57 ftl -- common/autotest_common.sh@954 -- # kill -0 84891 00:33:08.320 18:02:57 ftl -- common/autotest_common.sh@955 -- # uname 00:33:08.320 18:02:57 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:08.320 18:02:57 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84891 00:33:08.320 18:02:58 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:08.320 18:02:58 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:08.320 killing process with pid 84891 00:33:08.320 18:02:58 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84891' 00:33:08.320 18:02:58 ftl -- common/autotest_common.sh@969 -- # kill 84891 00:33:08.320 18:02:58 ftl -- common/autotest_common.sh@974 -- # wait 84891 00:33:09.706 18:02:59 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:09.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:09.706 Waiting for block devices as requested 00:33:09.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:09.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:09.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:10.229 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:15.522 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:15.522 Remove shared memory files 00:33:15.522 18:03:04 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:15.522 18:03:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:15.522 18:03:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:15.522 18:03:04 ftl -- 
ftl/common.sh@206 -- # rm -f rm -f 00:33:15.522 18:03:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:15.522 18:03:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:15.522 18:03:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:15.522 ************************************ 00:33:15.522 END TEST ftl 00:33:15.522 ************************************ 00:33:15.522 00:33:15.522 real 18m25.840s 00:33:15.522 user 21m3.673s 00:33:15.522 sys 1m27.867s 00:33:15.522 18:03:04 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:15.522 18:03:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:15.522 18:03:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:15.522 18:03:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:15.522 18:03:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:15.522 18:03:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:15.522 18:03:04 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:15.522 18:03:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:15.522 18:03:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:15.522 18:03:04 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:15.522 18:03:04 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:15.522 18:03:04 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:15.522 18:03:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:15.522 18:03:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.522 18:03:04 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:15.522 18:03:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:15.522 18:03:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:15.522 18:03:04 -- common/autotest_common.sh@10 -- # set +x 00:33:16.908 INFO: APP EXITING 00:33:16.908 INFO: killing all VMs 00:33:16.908 INFO: killing vhost app 00:33:16.908 INFO: EXIT DONE 00:33:17.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:17.429 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:17.429 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:17.690 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:17.690 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:17.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:18.237 Cleaning 00:33:18.237 Removing: /var/run/dpdk/spdk0/config 00:33:18.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:18.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:18.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:18.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:18.504 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:18.504 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:18.504 Removing: /var/run/dpdk/spdk0 00:33:18.504 Removing: /var/run/dpdk/spdk_pid57346 00:33:18.504 Removing: /var/run/dpdk/spdk_pid57548 00:33:18.504 Removing: /var/run/dpdk/spdk_pid57755 00:33:18.504 Removing: /var/run/dpdk/spdk_pid57848 00:33:18.504 Removing: /var/run/dpdk/spdk_pid57889 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58010 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58028 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58222 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58315 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58405 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58511 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58608 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58642 
00:33:18.504 Removing: /var/run/dpdk/spdk_pid58683 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58749 00:33:18.504 Removing: /var/run/dpdk/spdk_pid58833 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59269 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59322 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59385 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59390 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59492 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59502 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59599 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59615 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59668 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59686 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59739 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59757 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59906 00:33:18.504 Removing: /var/run/dpdk/spdk_pid59943 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60026 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60198 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60277 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60313 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60741 00:33:18.504 Removing: /var/run/dpdk/spdk_pid60839 00:33:18.505 Removing: /var/run/dpdk/spdk_pid60962 00:33:18.505 Removing: /var/run/dpdk/spdk_pid61015 00:33:18.505 Removing: /var/run/dpdk/spdk_pid61035 00:33:18.505 Removing: /var/run/dpdk/spdk_pid61119 00:33:18.505 Removing: /var/run/dpdk/spdk_pid61737 00:33:18.505 Removing: /var/run/dpdk/spdk_pid61768 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62236 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62330 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62450 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62503 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62529 00:33:18.505 Removing: /var/run/dpdk/spdk_pid62554 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64408 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64545 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64549 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64566 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64605 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64609 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64621 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64666 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64670 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64682 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64727 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64731 00:33:18.505 Removing: /var/run/dpdk/spdk_pid64743 00:33:18.505 Removing: /var/run/dpdk/spdk_pid66113 00:33:18.505 Removing: /var/run/dpdk/spdk_pid66211 00:33:18.505 Removing: /var/run/dpdk/spdk_pid67619 00:33:18.505 Removing: /var/run/dpdk/spdk_pid68999 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69085 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69172 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69248 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69348 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69417 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69566 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69925 00:33:18.505 Removing: /var/run/dpdk/spdk_pid69956 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70408 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70595 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70693 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70808 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70858 00:33:18.505 Removing: /var/run/dpdk/spdk_pid70889 00:33:18.505 Removing: /var/run/dpdk/spdk_pid71189 00:33:18.505 Removing: /var/run/dpdk/spdk_pid71251 00:33:18.505 Removing: /var/run/dpdk/spdk_pid71325 00:33:18.505 Removing: 
/var/run/dpdk/spdk_pid71714 00:33:18.505 Removing: /var/run/dpdk/spdk_pid71860 00:33:18.505 Removing: /var/run/dpdk/spdk_pid72665 00:33:18.505 Removing: /var/run/dpdk/spdk_pid72797 00:33:18.505 Removing: /var/run/dpdk/spdk_pid72962 00:33:18.505 Removing: /var/run/dpdk/spdk_pid73055 00:33:18.505 Removing: /var/run/dpdk/spdk_pid73422 00:33:18.505 Removing: /var/run/dpdk/spdk_pid73711 00:33:18.505 Removing: /var/run/dpdk/spdk_pid74087 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74274 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74427 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74480 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74656 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74681 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74734 00:33:18.766 Removing: /var/run/dpdk/spdk_pid74997 00:33:18.766 Removing: /var/run/dpdk/spdk_pid75233 00:33:18.766 Removing: /var/run/dpdk/spdk_pid75926 00:33:18.766 Removing: /var/run/dpdk/spdk_pid76589 00:33:18.766 Removing: /var/run/dpdk/spdk_pid77282 00:33:18.766 Removing: /var/run/dpdk/spdk_pid78043 00:33:18.766 Removing: /var/run/dpdk/spdk_pid78190 00:33:18.766 Removing: /var/run/dpdk/spdk_pid78277 00:33:18.766 Removing: /var/run/dpdk/spdk_pid78966 00:33:18.766 Removing: /var/run/dpdk/spdk_pid79025 00:33:18.766 Removing: /var/run/dpdk/spdk_pid79676 00:33:18.766 Removing: /var/run/dpdk/spdk_pid80119 00:33:18.766 Removing: /var/run/dpdk/spdk_pid80827 00:33:18.766 Removing: /var/run/dpdk/spdk_pid80960 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81002 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81060 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81123 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81176 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81363 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81456 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81523 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81634 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81670 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81728 00:33:18.766 Removing: /var/run/dpdk/spdk_pid81878 00:33:18.766 Removing: /var/run/dpdk/spdk_pid82105 00:33:18.766 Removing: /var/run/dpdk/spdk_pid82745 00:33:18.766 Removing: /var/run/dpdk/spdk_pid83493 00:33:18.766 Removing: /var/run/dpdk/spdk_pid84193 00:33:18.766 Removing: /var/run/dpdk/spdk_pid84891 00:33:18.766 Clean 00:33:18.766 18:03:08 -- common/autotest_common.sh@1451 -- # return 0 00:33:18.766 18:03:08 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:18.766 18:03:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.766 18:03:08 -- common/autotest_common.sh@10 -- # set +x 00:33:18.766 18:03:08 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:18.766 18:03:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.766 18:03:08 -- common/autotest_common.sh@10 -- # set +x 00:33:18.766 18:03:08 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:19.028 18:03:08 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:19.028 18:03:08 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:19.028 18:03:08 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:19.028 18:03:08 -- spdk/autotest.sh@394 -- # hostname 00:33:19.028 18:03:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t 
fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:19.028 geninfo: WARNING: invalid characters removed from testname! 00:33:45.615 18:03:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:47.529 18:03:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:49.444 18:03:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:51.360 18:03:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:53.276 18:03:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:55.824 18:03:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:57.733 18:03:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:57.733 18:03:47 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:33:57.733 18:03:47 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:33:57.733 18:03:47 -- common/autotest_common.sh@1691 -- $ lcov --version 00:33:57.995 18:03:47 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:33:57.995 18:03:47 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:57.995 18:03:47 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:57.995 18:03:47 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:57.995 18:03:47 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:57.995 18:03:47 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:57.995 18:03:47 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:57.995 18:03:47 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:57.995 18:03:47 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:57.995 18:03:47 -- 
scripts/common.sh@340 -- $ ver1_l=2 00:33:57.995 18:03:47 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:57.995 18:03:47 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:33:57.995 18:03:47 -- scripts/common.sh@344 -- $ case "$op" in 00:33:57.995 18:03:47 -- scripts/common.sh@345 -- $ : 1 00:33:57.995 18:03:47 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:57.995 18:03:47 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:57.995 18:03:47 -- scripts/common.sh@365 -- $ decimal 1 00:33:57.995 18:03:47 -- scripts/common.sh@353 -- $ local d=1 00:33:57.995 18:03:47 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:57.995 18:03:47 -- scripts/common.sh@355 -- $ echo 1 00:33:57.995 18:03:47 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:57.995 18:03:47 -- scripts/common.sh@366 -- $ decimal 2 00:33:57.995 18:03:47 -- scripts/common.sh@353 -- $ local d=2 00:33:57.995 18:03:47 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:57.995 18:03:47 -- scripts/common.sh@355 -- $ echo 2 00:33:57.995 18:03:47 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:57.995 18:03:47 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:57.995 18:03:47 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:57.995 18:03:47 -- scripts/common.sh@368 -- $ return 0 00:33:57.995 18:03:47 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.995 18:03:47 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:33:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.995 --rc genhtml_branch_coverage=1 00:33:57.995 --rc genhtml_function_coverage=1 00:33:57.995 --rc genhtml_legend=1 00:33:57.995 --rc geninfo_all_blocks=1 00:33:57.995 --rc geninfo_unexecuted_blocks=1 00:33:57.995 00:33:57.995 ' 00:33:57.995 18:03:47 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:33:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.995 --rc genhtml_branch_coverage=1 00:33:57.995 --rc genhtml_function_coverage=1 00:33:57.995 --rc genhtml_legend=1 00:33:57.995 --rc geninfo_all_blocks=1 00:33:57.995 --rc geninfo_unexecuted_blocks=1 00:33:57.995 00:33:57.995 ' 00:33:57.995 18:03:47 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:33:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.995 --rc genhtml_branch_coverage=1 00:33:57.995 --rc genhtml_function_coverage=1 00:33:57.995 --rc genhtml_legend=1 00:33:57.995 --rc geninfo_all_blocks=1 00:33:57.995 --rc geninfo_unexecuted_blocks=1 00:33:57.995 00:33:57.995 ' 00:33:57.995 18:03:47 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:33:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.995 --rc genhtml_branch_coverage=1 00:33:57.995 --rc genhtml_function_coverage=1 00:33:57.995 --rc genhtml_legend=1 00:33:57.995 --rc geninfo_all_blocks=1 00:33:57.995 --rc geninfo_unexecuted_blocks=1 00:33:57.995 00:33:57.995 ' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:57.995 18:03:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:33:57.995 18:03:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:57.995 18:03:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.995 18:03:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.995 18:03:47 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.995 18:03:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.995 18:03:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.995 18:03:47 -- paths/export.sh@5 -- $ export PATH 00:33:57.995 18:03:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.995 18:03:47 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:57.995 18:03:47 -- common/autobuild_common.sh@486 -- $ date +%s 00:33:57.995 18:03:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728842627.XXXXXX 00:33:57.995 18:03:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728842627.bVNfO0 00:33:57.995 18:03:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:33:57.995 18:03:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@502 -- $ get_config_params 00:33:57.995 18:03:47 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:57.995 18:03:47 -- common/autotest_common.sh@10 -- $ set +x 00:33:57.995 18:03:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:57.995 18:03:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:33:57.995 18:03:47 -- pm/common@17 -- $ local monitor 00:33:57.995 18:03:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:57.995 18:03:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:33:57.995 18:03:47 -- pm/common@25 -- $ sleep 1 00:33:57.995 18:03:47 -- pm/common@21 -- $ date +%s 00:33:57.995 18:03:47 -- pm/common@21 -- $ date +%s 00:33:57.995 18:03:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728842627 00:33:57.995 18:03:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728842627 00:33:57.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728842627_collect-vmstat.pm.log 00:33:57.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728842627_collect-cpu-load.pm.log 00:33:58.939 18:03:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:33:58.939 18:03:48 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:58.939 18:03:48 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:58.939 18:03:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:58.939 18:03:48 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:58.939 18:03:48 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:58.939 18:03:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:58.939 18:03:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:58.939 18:03:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:58.939 18:03:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:58.939 18:03:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:58.939 18:03:48 -- pm/common@44 -- $ pid=86574 00:33:58.939 18:03:48 -- pm/common@50 -- $ kill -TERM 86574 00:33:58.939 18:03:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:58.939 18:03:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:58.939 18:03:48 -- pm/common@44 -- $ pid=86575 00:33:58.939 18:03:48 -- pm/common@50 -- $ kill -TERM 86575 00:33:58.939 + [[ -n 5028 ]] 00:33:58.939 + sudo kill 5028 00:33:58.950 [Pipeline] } 00:33:58.966 [Pipeline] // timeout 00:33:58.972 [Pipeline] } 00:33:58.987 [Pipeline] // stage 00:33:58.992 [Pipeline] } 00:33:59.008 [Pipeline] // catchError 00:33:59.018 [Pipeline] stage 00:33:59.020 [Pipeline] { (Stop VM) 00:33:59.033 [Pipeline] sh 00:33:59.365 + vagrant halt 00:34:02.667 ==> default: Halting domain... 00:34:07.972 [Pipeline] sh 00:34:08.256 + vagrant destroy -f 00:34:10.798 ==> default: Removing domain... 
00:34:10.812 [Pipeline] sh 00:34:11.096 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:11.106 [Pipeline] } 00:34:11.121 [Pipeline] // stage 00:34:11.126 [Pipeline] } 00:34:11.141 [Pipeline] // dir 00:34:11.147 [Pipeline] } 00:34:11.161 [Pipeline] // wrap 00:34:11.168 [Pipeline] } 00:34:11.181 [Pipeline] // catchError 00:34:11.190 [Pipeline] stage 00:34:11.193 [Pipeline] { (Epilogue) 00:34:11.206 [Pipeline] sh 00:34:11.494 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:16.784 [Pipeline] catchError 00:34:16.786 [Pipeline] { 00:34:16.799 [Pipeline] sh 00:34:17.084 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:17.084 Artifacts sizes are good 00:34:17.094 [Pipeline] } 00:34:17.107 [Pipeline] // catchError 00:34:17.119 [Pipeline] archiveArtifacts 00:34:17.126 Archiving artifacts 00:34:17.240 [Pipeline] cleanWs 00:34:17.253 [WS-CLEANUP] Deleting project workspace... 00:34:17.253 [WS-CLEANUP] Deferred wipeout is used... 00:34:17.260 [WS-CLEANUP] done 00:34:17.262 [Pipeline] } 00:34:17.279 [Pipeline] // stage 00:34:17.284 [Pipeline] } 00:34:17.299 [Pipeline] // node 00:34:17.304 [Pipeline] End of Pipeline 00:34:17.349 Finished: SUCCESS